Practical
Business Process
Engineering
Beta Edition 1.1
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “as is” basis, without warranties or conditions of any kind, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Index 181
List of Figures
1.1 Pit stop for the Scuderia Ferrari during a Formula 1 race. 23
1.2 The McLaren team’s telemetry unit at the Formula 1. 24
1.3 The process innovation cycle. 26
1.4 Schematic diagram of real-time procedure adopted by Spevi. 28
1.5 Part of the manually completed event log file. 29
1.6 As-is model of current process generated by ProM from event log
and exported to Yasper. 29
1.7 To-be model of process modelled and optimised in Yasper for export
to Yawl. 30
2.1 Solution Prob. 2.1. The objective tree is subdivided into four objective groups (in this example we have only two). The objectives are written succinctly, in a few words and with a verb. In this example, the objectives found belong to the first two objective groups. 33
2.2 Solution Prob. 2.2. The output tree represents the main two strate-
gies of the Bank: to maintain the current outputs and to evolve to
electronic services. 34
2.3 Solution Prob. 2.2. A general objective tree model of an organisation has four groups of objectives related by causality. The objectives found belong to the first three levels. 35
2.4 Solution Prob. 2.2. The initiative allocation diagram corresponds to the two initiatives described in the text. 35
2.5 Solution Prob. 2.3. The objective tree shows new objects, namely four objectives that do not exist in the original text. Thus, the model is simplified and made more readable. 36
2.6 Solution Prob. 2.4. The objective tree differentiates causes and effects.
The diagram can be read top-down (from effects to causes) by an-
swering the question how or bottom-up (from causes to effects) by
answering the question why. 37
2.7 Solution Prob. 2.5. The objective tree of the company Codelco. This is again an example of the identification of the objectives of a complete company, i.e., we try to identify four levels of objectives. In this case, we only found objectives related to the process performance space. 38
2.26 The best decision according to the Expected Value (EV) is warehouse management system with value $11.2. 57
2.27 Expected Opportunity Loss. The best decision is warehouse management system with value $10.8. 58
2.28 The decision tree and identification of the best decision under perfect information, for calculating the Expected Value of Perfect Information (EVPI): 22.0 − 11.2 = $10.8. 58
2.29 Decision tree of the problem of the machine owner with the additional information of the consultant. The strategies are warehouse management system in case the consultant recommends contracting (C) and integrated material requirement planning in case the consultant recommends not contracting (NC). 62
2.30 Banco las Américas is determined to improve its service to university students. The bank has identified four criteria university students value and three projects to prioritise. 65
2.31 Pair comparison matrix according to the criterion “improve account visibility” (C1). An Excel sheet can be downloaded from the link ahp.xls. 71
2.32 Pair comparison matrix and normalised matrix for our four criteria. 72
2.33 This step consists of computing the weights of each project according to the weights of the projects for each criterion and the weights of all criteria. 72
2.34 The last step consists of computing the consistency index. RI is determined according to Table 2.23. In general, the degree of consistency is satisfactory if CI/RI is less than 0.10, as in this case. 73
3.1 This poster contains the objects and their definitions of the standard BPMN 2.0 published by the OMG (https://www.omg.org/). The poster was produced by the Berliner BPM-Offensive in 2015. You can access this poster in Spanish via the link poster-ES. 80
3.2 Hardware Retailer. This is the process Shipment Process of a Hard-
ware Retailer. We are trying to understand the behaviour of differ-
ent gateways in the process flow. 83
3.3 Order Process for Pizza. This example is about business-to-business collaboration, because we want to model the interaction between a pizza customer and the vendor explicitly. 85
3.4 Fulfilment and Procurement. This example tries to explain the be-
haviour of boundary events attached to an activity. Let us observe
that the exclusive gateway that opens both paths is not closed by an-
other gateway. We say that XOR-Split is mandatory (so the XOR-Join
gateway is optional). 86
3.5 Stock Level Management. This example tries to explain the behaviour
of boundary events attached to an activity. It is similar to the former
example. 87
3.31 A Simple DMN Table. It describes the decision about the creditwor-
thiness of a customer. It has four rules and two inputs. Every DMN
Table requires only one value for the output. 109
3.32 DMN Table for the Hit Policies. It presents the rules for selecting
rules in a DMN Table. 109
3.33 A Simple DMN Table. It describes the decision about the creditwor-
thiness of a customer. It has four rules and two inputs. Every DMN
Table requires only one value for the output. 110
3.34 The extensive Table With Simulation. It presents an extensive version of the original DMN Table of Fig. 3.30. The left table contains the partitions of the output values. The right one contains the calculation for representative customers. 111
3.35 The Applicant Risk Rating. It classifies applicants according to age and medical history. 111
3.36 The Applicant Risk Rating. Extensive table of the DMN Table of Fig.
3.35. We show the formula in the Numbers sheet. 112
3.37 Personal Loan Compliance. It classifies loan applicants according to the personal rating, card balance, and education loan balance. 112
3.38 Personal Loan Compliance. The extensive table of the DMN Table
of Fig. 3.37. The best hit policy is ANY. 112
3.39 The Applicant Risk Rating (2nd Version). It classifies applicants according to age and medical history. 112
3.40 The Applicant Risk Rating (2nd Version). The extensive table of the table of Fig. 3.39. Because there is at least one input tuple (see employee 6) that corresponds to several different outputs, the best hit policy is FIRST. 113
3.41 Special Discount. It assigns a discount to a customer. 113
3.42 Special Discount. Extensive table of the DMN Table of Fig. 3.41. The
best hit policy is COLLECT. 113
3.43 Holidays. Rules for calculating the number of holiday days. 113
3.44 Holidays. The extensive table of the DMN Table of Fig. 3.43. Because the company will sum the total days of holidays, the best hit policy is COLLECT. 114
3.45 Holidays (2nd Version). It assigns the number of holiday days to an employee. 114
3.46 Holidays (2nd Version). The extensive table of the DMN Table of Fig. 3.45. Because the company will sum the total days of holidays, the best hit policy is COLLECT. 115
3.47 Student Financial Package Eligibility. It assigns a discount on study fees according to the student's GPA, extra-curricular activities, and national honor society membership. 115
3.48 Student Financial Package Eligibility. The extensive table of the DMN Table of Fig. 3.47. Assume that the university will assign only one discount. Because student 11 would obtain two discounts, the best hit policy will be FIRST. 115
3.49 Final Price. The DMN Table contains only three rules according to
the description of the case. 116
3.50 Final Price. The extensive table with the corresponding simulation. Let us observe that the case silver and true has no answer from the DMN Table, so it is not possible to apply any hit policy. Thus the table must be modified for applications. 116
3.51 Pre-qualification. The DMN Table contains five rules according to
the description of the case. 117
3.52 Pre-qualification. The extensive table with the corresponding simulation. Let us observe that there are cases with more than one output, but they are the same. Therefore, the hit policy ANY is applicable. 117
3.53 Holidays. The DMN Table contains five rules according to the de-
scription of the case. 118
3.54 Holidays. The extensive table with the corresponding simulation. Let us observe that there are cases with more than one output; although they are the same, they must be summed. Therefore, the hit policy COLLECT is the best. 118
3.55 Nobel Prize Process. This model describes the process of the Nobel Prize in Medicine. Five participants are part of the collaboration diagram. 119
4.22 As-is order processing model. All tasks are manual and have additional time because the staff member does not work continuously. The client's actions have not been modelled. The arrival frequency of cases (every 2 hours), the times of each task, and the frequencies of each path were determined. 150
4.23 Simplified data model of invoice and payment. The data and infor-
mation are grouped in a tree model that will serve as the basis for
the ERM model. 151
4.24 Decision rule for the feasibility decision. This rule considers total amount,
product availability, and quantity. 151
4.25 To-be order processing model. We automate the decision rule, and a user task will clarify the order; a clerk will decide whether the request is possible or impossible. A 24-hour intermediate timer represents the time it takes the user to start the next task. 151
6.1 Model discovered from the event log of Table 6.1 165
6.2 Model discovered from the event log of Table 6.3 165
6.3 Model discovered from the event log of Table 6.4 166
6.4 Model with duplicated activities 166
6.5 Model with a free-choice construct that cannot be discovered with
the α algorithm 167
6.6 Model with a free-choice construct that can be discovered with the
α algorithm 168
6.7 Model with short loops. Task C can be executed an arbitrary number
of times 168
6.8 Spreadsheet for calculating Pearson's correlation coefficients 175
6.9 Network of relations between originators whose arcs represent Pearson's
correlation coefficient distances greater than 0 176
6.10 Hierarchy obtained according to the joint activities criterion measured
with the Pearson's correlation coefficient metric 176
6.11 Simple network of relations between 4 originators 176
6.12 Simple network of relations between 5 originators 177
List of Tables
2.1 Direct decision patterns for the initiatives evaluation. Let us note
that the number of the pattern corresponds to the number of nature
variables. 46
2.2 Implicit decision patterns examples for the initiatives evaluation.
The relationships between the variables or nodes are not explicit. As
in the case of explicit patterns, the number of the pattern corresponds
to the number of nature variables. 47
2.3 The CEO of the Palace Hotel in Lima is evaluating two Information Technology projects. The economic benefits of each initiative depend on whether the projects are successful or not. 48
2.4 Regret Table for the example Hotel Palace. This table shows the opportunity loss of bad decisions. If the CEO of the Palace Hotel decides for standard software, the opportunity loss will be $0 if nature decides for very successful and $60 for moderate success. The criterion for selecting the best decision will be minimax. 49
2.5 The Business Process Department of Asta Co. is trying to decide which
alternative project to implement. Each project has customer impact
categorised from A (best) to F (worst), which depends on the imple-
mentation type. 51
2.6 Rich and Co. is considering implementing one of four different process projects. Each project has an implicit impact on the variation of the cycle time of its processes. 52
2.7 Problem Rich and Co. This table shows the opportunity loss measured as the percentage increase of the cycle time for each decision and each level of process performance. 53
2.8 Microcomp is a Chilean-based software company. It is planning to
invest in different projects. The unitary cost reduction of its prod-
uct depends on different scenarios. 55
3.1 Definition of the credit score rating categories. The output of this ta-
ble is used as input in the DMN Table pre-qualification. 116
5.1 Fields generated by the task Write response for workflow of Fig. 5.1 153
5.2 Trigger with a form for workflow of Fig. 5.2 154
5.3 Roles and tasks for workflow of Fig. 5.2 154
5.4 Trigger with a form for workflow of Fig. 5.3 155
5.5 Roles and tasks for workflow of Fig. 5.3 155
5.6 Manual Trigger for workflow of Fig. 5.4 156
5.7 Roles and tasks for workflow of Fig. 5.4 156
5.8 Fields processed by tasks for workflow of Fig. 5.4 157
5.9 Form Trigger for workflow of Fig. 5.5 158
5.10 Roles and tasks for workflow of Fig. 5.5 158
5.11 Fields processed by tasks for workflow of Fig. 5.5 159
5.12 Form Trigger for workflow of Fig. 5.6 160
5.13 Roles and tasks for workflow of Fig. 5.6 160
5.14 Fields processed by tasks for workflow of Fig. 5.6 161
Dedicated to my wife.
Introduction
1.1 Summary
race. How does a team win the F1? What factors are key to its suc-
cess? The obvious answer is high speed, and the same is true of a
company’s ability to satisfy its customers. But how do you achieve
that high speed on the racetrack?
A large part of any race is played out in the pit stops, where a
whole series of complex manoeuvres have to be executed simultane-
ously in a matter of seconds (see Figure 1.1). This requires coordina-
tion supported by sophisticated communication systems. Working
quickly and precisely is absolutely fundamental, with nothing left to
chance. Speed and accuracy are of the essence.
The brutal reality of the F1 is evident at the end of the race. A
stroke of luck and the team grabs a winning position; an engine
breakdown or technical error and they end up well back in the field.
A huge effort goes into the search for mechanical and aerodynamic
improvements, for every second saved is precious. Discouragement
is a constant danger as the teams pull out all the stops to achieve an
optimum speed and score points.
But that’s not all. Figure 1.2 shows the McLaren team’s telemetry
data unit, whose job is to monitor vehicle behaviour from second
to second. Sensors measure engine temperature, oil composition,
the rpm-speed ratio and other vehicle performance parameters, and
the unit’s engineers closely analyse the information to determine
exactly what’s happening and why at every moment. They, too, leave
nothing to chance, deciding when the driver takes a pit stop and
what adjustments to make in the few seconds the car is off the track.
Speed is everything, of course, but speed is only possible if the pit
crew can count on a rigorous and detailed performance monitoring
system to back them up.
It is these performance measurements that make it possible to adjust the vehicle to the changing conditions of the race. Without such
any-time, any-place information captured by the data-acquisition
technology, fine-tuning the original strategy would be out of the
question. Fortunately, in the business world the technology for imple-
menting rapid strategy adjustments is available. How well it works
will depend on the following three factors:
One further question for top management: where does the real
challenge lie? In our view the heart of the matter is choosing how
fast to carry out process innovation, which means determining the
speed of the redesign-implementation-monitoring cycle. A 1960 Volk-
swagen Beetle may have been an economical alternative in its time,
but today it would be a slow solution. A Formula 1 vehicle is a fan-
tastic choice for speed, but it’s likely to be expensive, not to mention
risky. Today, however, companies who go for the Formula 1 option
have the technology they need at their disposal; the test of their man-
agerial abilities will be to elect a process innovation speed that will
meet the challenges of their environment.
Not only does each team have its own method, they're also using two different sets of tools.
Clearly, what this team needs to win the race is coordination. And
that implies a tool that can model and analyse processes and coordi-
nate and monitor their execution. It must also be able to effectively
articulate the requirements of the business using the implemented
technology. Fortunately, companies today can choose from a variety
of tools and techniques that provide solid support for these tasks.
In the area of process design, for instance, there are a series of
graphical and markup languages (in XML format) that assist com-
munication across different providers and design tools. One of the
best-known graphical languages is BPMN (see http://bpmn.org),
promoted by the non-profit Business Process Management Initiative
consortium that brings together the most prominent process design
software firms. The tools supporting these languages contain fully integrated, user-friendly dynamic simulation features for calculating capabilities and
determining cycle times and costs. For example, giant software com-
pany SAP offers ARIS (Architecture of Integrated Information Sys-
tems) in its NetWeaver suite, Oracle’s BPEL also provides ARIS for
modelling, IBM recommends using WebSphere for the design, simu-
lation and integration of information systems, and Microsoft has its
BizTalk platform that allows systems to be integrated independently
of the platform they’re built on.
Major business software providers have also given their de facto ap-
proval to the use of Service Oriented Architecture (SOA), a standard
that allows information systems to be designed using software ser-
vices from different systems. Even more important, this technology
makes it possible to orchestrate these services during execution as
a business process. This integration and orchestration technology is
for each of them are listed in Table 1.1. The most likely path (prob-
ability of 60%) begins with the customer places order event and ends
with email to co-worker (without clarification) while the least probable
one (2.6%) begins with customer places order and ends with email to
customer (with clarification).
Problem 1.1. Explain why the title of the article is Real Time Enterprise and what the differences are with Business Process Management.
Solution 1.3 BPM ensures that the right people execute the right tasks at the right moment. What, then, is the advantage of an RTE? Answer: the advantage is that the execution of the process that manages the processes (i.e., BPM) is carried out at high speed. In other words, changes to the operational processes are very fast. That is, RTE gives the company flexibility and agility to change its structures and processes.
Problem 1.4. Explain also the role of technicians and that of senior
management.
Solution 1.4 Technicians require the technical skills and tools to de-
sign, analyse, optimise, implement, and monitor processes efficiently.
Senior management, instead, should ensure the processes operate at
the required speed in order to meet the needs of ever more demand-
ing and changing customers.
2
Strategic Process Management
2.1 Introduction
Problems
Problem 2.1. Identify objects called objectives and the relationships
between them. Model the objective tree:
The industrial laundry service known as LavaBien lost important customers during the last year. The owner asked his loyal customers about the causes of the contract terminations (which he himself had considered secure). The reason given is that a competitor is offering the same service for a lower price. Apparently, the competitor can make a better offer because they invested in the most advanced washing technology. With this information, he consulted his suppliers about purchasing new machinery; however, he had to give this up due to the high level of investment it meant. This is why he sought another solution. He devised a way to offer a differentiated service to his customers, which consisted in lowering the turnaround time by 50% and reducing the price by 20% for the customers.
Solution 2.2 The output tree of Banco las Américas is shown in Fig. 2.2. The output tree represents the two main strategies of the Bank: to maintain the current outputs and to evolve to electronic services.
The objective tree was modelled considering the text without further interpretation. Like example 2.1, the model has four groups of objectives related by causality. The objectives found belong to the first three levels: main (financial) objectives, customer-related objectives, and process-related objectives. It is shown in Fig. 2.3.
The initiative allocation diagram describes the two initiatives mentioned in the text. It is shown in Fig. 2.4.
The following problem was taken from John S. Hammond, Ralph L. Keeney, and Howard Raiffa, Smart Choices: A Practical Guide to Making Better Decisions, Harvard Business Review Press, 2015, page 37.
Problem 2.3. Some parents have to choose the best option for their children. Because they consider it a difficult decision, they write a list of what they want for their children. The list is as follows:
Solution 2.3 The solution is shown in Fig. 2.5. This example is very interesting because it introduces a particular concept of causality, namely the relation is part of, is contained in, or composition.
This is a very recurrent relationship between objects. The meaning is clear, but it can also be interpreted as a causal relationship in the following sense: the different parts of an object are the cause of the existence of such an object.
Another interesting idea in this example is the criterion used for
grouping objectives. We propose four general objectives which were
• Minimise emissions.
Problem 2.5. The 2016 strategy declaration of the Chilean copper concern Codelco can be found in the document downloadable from the link codelco-mision on page 19. Model the objective tree.
Solution 2.5 The solution is shown in Fig. 2.7. This is again an example of the identification of the objectives of a complete company, i.e., we will try to identify four levels of objectives. In this case, we only found objectives related to the process performance space. We identified neither financial, nor customer, nor resource-related objectives. The solution is immediate.
Problem 2.6. Identify and interpret the objects of Fig. 2.8. Model the
value added chain and the product tree.
Solution 2.6 The models are obtained immediately from Fig. 2.8 and
expressed in a standard form in Fig. 2.9 (value added chain) and in
Fig. 2.10 (output tree).
The following problem considers the concept of the Value Added Chain popularised by M. Porter in 1985 (M.E. Porter, Competitive Advantage: Creating and Sustaining Superior Performance, New York: Free Press, 1985).

Figure 2.11: The value added chain concept proposed and popularised by Michael Porter in 1985.
Problem 2.7. Consider Fig. 2.11. It represents the value added chain, a concept described and popularised by Michael Porter in 1985. A value added chain is understood as a set of activities that deliver a product valued by customers. Looking at the figure, answer the following questions:
1. What are the differences between primary and secondary activities? What is meant by core processes? What are core competences?
Solution 2.7
1. Primary activities are (according to Fig. 2.11) service, marketing & sales, outbound logistics, operations, and inbound logistics, while secondary activities (or support activities) are firm
Problem 2.8. This problem continues problem 2.6. With figures 2.8 and 2.12, model the extended value added chain and the objective tree. Propose a reasonable project and model the respective initiative allocation diagram.
Problem 2.9. Consider the well-known company Ikea and review its Web page at the link ikea-webpage (consulted on October 10, 2023). Let us observe that the main menu has the
Solution 2.9 Two output trees of the company Ikea were obtained from its web page. The left model represents a grouping associated with the productive system and less linked to the customer. Although the difference is not absolutely clear, the right model is more associated with the customer. Each model expresses an orientation of the company strategy; both are shown in Fig. 2.16.
Solution 2.10 The value added chain of Fig. 2.17 is easy to model. The object harvesting technology of the figure was not considered as a process. The model is shown in Fig. 2.18.
2. For the relevant output, identify the client and their requirement(s). Model the product allocation diagram. Here too, one must argue with evidence why the requirement chosen for that output is relevant. We call this customer requirement (O1 − E) to denote that it is a type of objective of the external client that we will demand from the complete process. This objective O1 − E now coincides with an internal performance objective (O1 − I, or denoted simply as O1), i.e., what the customer requires from the process (O1 − E) is the same as what we will require as process performance (O1 − I).
3. Model the value added chain (VAC). This model indicates the processes and subprocesses that produce the relevant output of choice. It is usually necessary to disaggregate to level 3 or 4 to ensure a good analysis.
• A set of decision variables that take different values depending on the decision-making agent. In its simplest form, it consists of the decision to invest or not to invest in an initiative.
• A set of variables of nature (or chance variables) that do not depend on the decision maker. In general, these variables are related to: the performance of the process (caused by the introduction of the initiative) and customer satisfaction (caused by a change in the performance of the process).
Table 2.2: Implicit decision patterns examples for the initiatives evaluation. The relationships between the variables or nodes are not explicit. As in the case of explicit patterns, the number of the pattern corresponds to the number of nature variables.

pattern nr.   decision tree   meaning
1             (diagram)       initiative changes the customer satisfaction.
2             (diagram)       initiative changes the organisation performance.

Examples of implicit patterns are shown in Table 2.2. Implicit relationships probably hide underlying cause-effect relationships, unlike explicit relations that try to show clear and detailed relationships. On the other hand, a model that describes in detail all the relationships may be more precise, but more difficult to understand. A fair balance between precision and complexity is recommended.
Now, we will propose a set of problems and exercises.
Problem 2.12. The CEO of the Palace Hotel in Lima is evaluating two Information Technology projects for the passenger administration. She has two alternatives: buying and implementing a standard software or continuing with the current system. An economic evaluation of both alternatives should consider that the standard software alternative has some risks. The economic benefits of both alternatives are shown in Table 2.3. Select the best decision using the following decision criteria: (a) maximax, (b) maximin, (c) minimax regret, (d) Hurwicz (α = 0.4), and (e) Laplace (or equal likelihood). (f) Recognise the pattern number of the decision and whether it is explicit or implicit.
Table 2.3: The CEO of the Palace Hotel in Lima is evaluating two Information Technology projects. The economic benefits of each initiative depend on whether the projects are successful or not.

                       Economic Benefits
Decision               Very Successful    Moderate Success
Standard software      $120.000           $60.000
Legacy system          $85.000            $85.000

A decision tree contains three types of variables: decision, nature, and consequence random value. The random variables of nature obviously do not depend on the decision maker. The consequence random value depends on the combination of decisions and the variables of nature. The random variables of nature can be interpreted as external decisions (or decisions of nature). It is customary to model these variables on the tree nodes, where the decision variables are square nodes and nature variables are circles. The values of the consequence variables corresponding to each value combination of decisions and nature variables are modelled as triangles. The arcs between each node (or variable) represent the values taken by the decision and nature variables respectively.
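These node conventions can be made concrete in a few lines of code. The following is a minimal sketch (not from the book) of "folding back" a decision tree: chance (nature) nodes take probability-weighted averages, decision nodes take the best branch, and leaves hold the consequence values. The tree and its payoffs are hypothetical.

```python
# Minimal decision-tree fold-back: squares (decisions) take the best
# branch, circles (nature) take the probability-weighted average, and
# triangles (leaves) hold consequence values. Numbers are hypothetical.

def fold_back(node):
    """Return the value of a decision-tree node."""
    kind = node["kind"]
    if kind == "leaf":                      # triangle: consequence value
        return node["value"]
    if kind == "chance":                    # circle: expected value
        return sum(p * fold_back(child) for p, child in node["branches"])
    if kind == "decision":                  # square: best alternative
        return max(fold_back(child) for _, child in node["branches"])
    raise ValueError(f"unknown node kind: {kind}")

# A decision between a risky initiative and a safe one (hypothetical).
tree = {
    "kind": "decision",
    "branches": [
        ("risky initiative", {
            "kind": "chance",
            "branches": [
                (0.4, {"kind": "leaf", "value": 100.0}),
                (0.6, {"kind": "leaf", "value": -10.0}),
            ],
        }),
        ("safe initiative", {"kind": "leaf", "value": 25.0}),
    ],
}

print(fold_back(tree))   # 0.4*100 + 0.6*(-10) = 34.0 beats 25.0
```

The same recursion underlies all the decision-tree figures in this chapter; only the payoffs and probabilities change.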
Table 2.4: Regret Table for the example Hotel Palace. This table shows the opportunity loss of bad decisions. If the CEO of the Palace Hotel decides for standard software, the opportunity loss will be $0 if nature decides for very successful and $60 for moderate success. The criterion for selecting the best decision will be minimax.

                       Opportunity Loss
Decision               Very Successful    Moderate Success
Standard software      $0                 $60.000
Legacy system          $0                 $0

To determine the table, proceed as follows. Suppose the agent decides on standard software. If nature plays very successful, the opportunity loss is $0, but if nature plays moderate success, the loss will be $60. In the event that the agent decides on the legacy system, the opportunity loss will be $0 regardless of nature's decision. The final decision table is shown in Table 2.4. It is suggested to use minimax.
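The regret-table construction can be written mechanically. A small sketch with a hypothetical payoff table (the project names and values below are illustrative, not the Hotel Palace figures): for each state of nature, subtract each payoff from the best payoff in that state, then apply minimax regret.

```python
# Building a regret (opportunity loss) table from a payoff table:
# for each state of nature, the regret of a decision is the best payoff
# in that state minus the decision's own payoff. Hypothetical numbers.

payoffs = {
    "project A": [120, 40],   # columns: state 1, state 2
    "project B": [85, 85],
    "project C": [60, 100],
}

n_states = 2
best_per_state = [max(row[s] for row in payoffs.values())
                  for s in range(n_states)]

regret = {d: [best_per_state[s] - row[s] for s in range(n_states)]
          for d, row in payoffs.items()}

# Minimax regret: choose the decision whose worst-case regret is smallest.
# Here project B has the smallest worst-case regret (35).
decision = min(regret, key=lambda d: max(regret[d]))
print(regret)
print(decision)
```

The same two steps (column-wise best, then row-wise worst) reproduce any of the regret tables in this chapter.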
Table 2.5: The Business Process Department of Asta Co. is trying to decide which alternative project to implement. Each project has customer impact categorised from A (best) to F (worst), which depends on the implementation type.

             Customer Impact
Decision     Intern    Outsourcing    Mix
I            B         D              D
II           C         B              F
III          F         A              C

Solution 2.13 (a) and (b). The solution follows the same procedure as the one above, except that the consequence variable takes non-numeric values; therefore they can be neither summed nor multiplied by another number. But their values have an order. Therefore, we will assign an ordered scale to each value, i.e., 5 for A, 4 for B, down to 0 for F, so the scale has the usual order. With this consideration, we obtain the corresponding decision trees for the criteria maximax and maximin, shown in Fig. 2.21 and Fig. 2.22 respectively.
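The ordinal mapping and the two criteria are easy to check mechanically. A sketch using the scale 5 for A down to 0 for F and the impacts of Table 2.5:

```python
# Mapping the ordinal grades A..F to numbers (5 for A down to 0 for F),
# then applying maximax and maximin to the Asta Co. impact table.

scale = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

impact = {              # columns: Intern, Outsourcing, Mix
    "I":   ["B", "D", "D"],
    "II":  ["C", "B", "F"],
    "III": ["F", "A", "C"],
}

scores = {d: [scale[g] for g in row] for d, row in impact.items()}

maximax = max(scores, key=lambda d: max(scores[d]))   # best best-case
maximin = max(scores, key=lambda d: min(scores[d]))   # best worst-case
print(maximax, maximin)   # III (maximax) and I (maximin)
```

With this scale, maximax selects project III (its best case, A = 5, is the highest in the table) and maximin selects project I (its worst case, D = 2, beats the F = 0 worst cases of II and III).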
and (e) equal likelihood. Determine (f) the pattern of the decision problem and whether it is explicit or implicit.
Solution 2.14 The solution follows the same procedure as problem 2.12.
(a) and (b). The solutions using the criteria maximax and maximin are shown in Fig. 2.23 and 2.24 respectively. Let us observe that the corresponding decisions are IV and III respectively.
(c) The application of the criterion minimax regret requires determining the table of opportunity loss, which is shown in Table 2.7. For example, the opportunity loss for decision I and low process performance is 44 − (−32) = 76%. With Table 2.7 we modelled the decision tree of Fig. 2.25, to which the criterion minimax is applied. The best decision is I.
(d) Now we will apply the Hurwicz criterion with α = 0.6. Re-
member that α represents an optimism factor, so for each decision
it multiplies the best process performance of Table 2.6 by 0.6 and the
worst by 0.4. That is, the value for decision I is 44α − 32(1 − α) =
13.60%, for decision II it is 17.40%, for decision III the result is
24.00%, and for decision IV it is 32.40%. Therefore, the best decision
is project IV.
(e) The Laplace criterion is applied as before, but with the factor
0.5. Thus, for decision I it is equal to 44 × 0.5 − 32 × 0.5 = 6.00%, for
decision II it is 6.00%, for decision III the cycle time reduction is
15.50%, and for decision IV it is 16.50%. Therefore, the best decision
is project IV.
(f) The pattern of this decision problem is of type 1 explicit.
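The Hurwicz and Laplace computations above can be sketched as two small functions. Only decision I's payoffs (best 44%, worst −32%) are quoted in the text; the remaining rows of Table 2.6 are not reproduced here, so the sketch uses that single pair.

```python
# Hurwicz and Laplace (equal-likelihood) criteria for a pair of
# payoffs, as in Solution 2.14 (d) and (e).
def hurwicz(best, worst, alpha):
    """Weight the best payoff by the optimism factor alpha."""
    return alpha * best + (1 - alpha) * worst

def laplace2(best, worst):
    """Equal-likelihood criterion for two outcomes (factor 0.5 each)."""
    return 0.5 * best + 0.5 * worst

print(hurwicz(44, -32, 0.6))  # decision I: 13.6 (% reduction)
print(laplace2(44, -32))      # decision I: 6.0
```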
Table 2.8: Microcomp is a Chilean-based software company. It is
planning to invest in different projects. The unitary cost reduction
of its product depends on different scenarios.

                                    Scenario
  Project                    Good     Regular    Bad
  Downsizing                $21,7     $19,1     $15,2
  Process automation         19,0      18,5      17,6
  Changes strategy levels    19,2      17,1      14,9
  Process outsourcing        22,5      16,8      13,8
  Product redesign           25,0      21,2      12,5

Table 2.9: The governmental agency for environment protection is
evaluating whether to invest in business process projects. The implicit
effects on the pollution of each project are calculated depending on
the strategy policy for the implementations.

                                         Reduction (%)
  Decision                            I    II   III   IV    V
  Cost reduction                     13     8    19   17    9
  Automation                          9    18     8   19   22
  Legacy systems                     16    26     5   13   24
  Standard software                   8    14    13   20    7
  Quality management system          18    30    22    3    2
  Business intelligence initiative    5     8    18   13   26
(a) maximax: The decision is the quality management
system with value 30%. (b) maximin: The decision is cost reduction
or automation with value 8%. (c) minimax regret: The decision is
cost reduction with opportunity loss of 11%. (d) Hurwicz (α = 0.7):
quality management system with value 21.6%. (e) Laplace: quality
management system with value 16%.
Table 2.10: A machine shop owner is attempting to decide whether to
invest. The profit or loss from each investment and the probabilities
associated with each contract.

                                               Economic Benefits
                                             Contract   No contract
  Decision                                      0.4         0.6
  Warehouse management system                $40.000     $ −8.000
  Workflow management system                  20.000       4.000
  Integrated material requirement planning    12.000      10.000
Solution 2.17 The difference between this problem and the previ-
ous ones is that we have a probability distribution over the random
variables of nature. In other words, we have imperfect information
about the value that the variable of nature will take. The imperfect
information is modelled through a probability distribution. In this
case, the variable of nature is the economic benefit that depends on
whether or not a contract is obtained. This variable has a probability
of 0.4 for the positive occurrence and a probability of 0.6 in the nega-
tive case.5 With this distribution we will, in the following paragraphs,
use two decision criteria to evaluate each decision and calculate the
value of having perfect information.

5 Probabilities are sometimes confused with Hurwicz's α optimism
factor. Not only are they conceptually different, but they are also
applied differently. The α-factor is multiplied by the maximum value,
while the probabilities are associated with the event (either maximum
or not).

(a) The first decision criterion is the Expected Value (EV). The
EV is the weighted value of the random variables, and the decision
rule is to choose the decision with maximum EV. As before, we will also
use the decision tree to apply this criterion. The solution is shown in
Fig. 2.26. Unlike the previous decision trees, each variable of nature
(represented by a yellow circle) has a probability assigned to each
value it can take. Note that we will assume that the probability dis-
tributions are discrete. As before, the value below the yellow circle is
the weighted average of all the branches, while the value below a
decision node is the value of the branch with the highest value chosen
by that decision. Thus, the decision of this problem is the
warehouse management system with an EV of $11.2.
Table 2.11: The opportunity loss table for the machine shop owner
problem. This table is obtained from Table 2.10.

                                               Opportunity Loss
                                             Contract   No contract
  Decision                                      0.4         0.6
  Warehouse management system                   $0         $48
  Workflow management system                     0          16
  Integrated material requirement planning       0           2
To make this computation we will use the decision tree again, but,
unlike the actual sequence of events (in which the decision occurs be-
fore the instantiation of variables), we will assume that the instantia-
tion of variables occurs before the decision. This tree is shown in Fig.
2.28. Observe in this figure that the decisions are maximisations and
that the tree has an expected value of $22.0. This would be the expected
value if we had perfect information; if we subtract the EV obtained
before, we obtain a value of $22.0 − EV = 22.0 − 11.2 = $10.8, which
corresponds to the Expected Value of Perfect Information (EVPI).
That is, the EVPI is the value that the decision maker is willing to pay
to count on perfect information in the context of a decision problem with
imperfect information.
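The EV and EVPI computations for Table 2.10 can be sketched as follows (payoffs in thousands of dollars; the dictionary layout is an illustrative choice, not the book's notation):

```python
# Expected value and EVPI for Table 2.10:
# P(contract) = 0.4, P(no contract) = 0.6.
probs = [0.4, 0.6]  # contract, no contract
payoffs = {
    "warehouse management system":              [40, -8],
    "workflow management system":               [20, 4],
    "integrated material requirement planning": [12, 10],
}

# EV of each decision: probability-weighted payoffs
ev = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoffs.items()}
best, best_ev = max(ev.items(), key=lambda kv: kv[1])

# With perfect information we pick the best payoff in each state first.
ev_perfect = sum(p * max(row[i] for row in payoffs.values())
                 for i, p in enumerate(probs))
evpi = ev_perfect - best_ev  # $22.0 - $11.2 = $10.8

print(best, best_ev, ev_perfect, evpi)
```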
Thus, we have solved the questions of the problem.
So far we have discussed the concept of the expected value of
perfect information. We noted that if perfect information could
be obtained regarding which state of nature would occur in the fu-
ture (before the decision), the agent could obviously make better
decisions. Although perfect information about the future is rare, it
is often possible to gain some amount of additional (imperfect) in-
formation that will improve decisions, particularly with the help of
engineering or consulting projects.
Suppose that the machine shop owner has decided to
hire a professional economic analyst who will provide additional
information about future economic conditions and thus about
whether the contract will be obtained. The analyst constantly studies
the economy and the political situation, and the results of this
research are what the investor receives. The consultant will provide
the machine shop owner with a report predicting one of two outcomes.
The report will be either contract, indicating that good economic
conditions are most likely to prevail in the future and that the contract
is probable, or negative, indicating that poor economic conditions will
probably occur and the contract will not be obtained.
Now, based on the analyst's past record in forecasting future eco-
nomic conditions, the machine owner has determined conditional
probabilities of the different report outcomes, given the occurrence
of each state of nature in the future. We will use the following nota-
tion to express these conditional probabilities: c: the contract will
happen, nc: the contract will not happen, C: the consultant reports
that the contract will happen, and NC: the consultant reports that the
contract will not happen. Let us remember that the probability assigned
by the machine owner that the contract occurs is P(c) = 0.40 and that
the contract does not occur is P(nc) = 0.60, as we saw in Problem 2.17.
This probability is called the a priori probability, i. e., the probability
prior to having the consultant's recommendation.
In the formula, we have calculated the joint probability P(c, C) =
0.32. In addition, the probability that the consultant reports contract is
P(C) = 0.38; thus the probability that the consultant reports no
contract is P(NC) = 1 − P(C) = 1 − 0.38 = 0.62.
Remember that the prior probability that the contract will occur
in the future was estimated by the agent as P(c) = 0.40. However,
by obtaining the additional information of a positive report from
the analyst, the investor can revise the prior probability to a
P(c|C) = 0.84 probability that the contract will occur,
given that the consultant reports that the contract will happen.
A substantial change from 0.40 to 0.84! The remaining posterior
(revised) probabilities are P(nc|C) = 0.16, P(nc|NC) = 0.87 and
P(c|NC) = 0.13.
Mathematically, the problem is as follows: given the a priori prob-
abilities P(c) and P(nc) and the conditional probabilities P(C|c),
P(NC|c), P(C|nc), and P(NC|nc), we calculated (by using Bayes'
rule) the a posteriori probabilities P(c|C), P(nc|C), P(c|NC), P(nc|NC)
and the probabilities of the report values P(C) and P(NC). Recall
that the a priori probability is the probability without the consultant's
information, while the conditional probabilities are obtained from the
past experience or expertise of the consultant.
One of the difficulties that can occur with additional information
is that as the size of the problem increases (i. e., as we add more deci-
Table 2.12: Computation of the a posteriori probabilities. The tables
correspond to the case with recommendation contract (C) and no
contract (NC) respectively. In red the given values.

recommendation C
               (a)            (b)               (c) = (a)·(b)     (c)/(d)
               a priori       conditional       joint             a posteriori
               probability    probability       probability       probability
  contract     P(c) = 0.40    P(C|c) = 0.80     P(c, C) = 0.32    P(c|C) = 0.84
  no contract  P(nc) = 0.60   P(C|nc) = 0.10    P(nc, C) = 0.06   P(nc|C) = 0.16
  sum (d)                                       P(C) = 0.38

recommendation NC
  contract     P(c) = 0.40    P(NC|c) = 0.20    P(c, NC) = 0.08   P(c|NC) = 0.13
  no contract  P(nc) = 0.60   P(NC|nc) = 0.90   P(nc, NC) = 0.54  P(nc|NC) = 0.87
  sum (d)                                       P(NC) = 0.62
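The columns of Table 2.12 for the recommendation C follow mechanically from Bayes' rule, as this short sketch shows:

```python
# Bayes' rule for the recommendation-C half of Table 2.12:
# priors P(c)=0.40, P(nc)=0.60; report likelihoods P(C|c)=0.80,
# P(C|nc)=0.10.
p_c, p_nc = 0.40, 0.60
p_C_given_c, p_C_given_nc = 0.80, 0.10

joint_c_C = p_c * p_C_given_c       # P(c, C)  = 0.32
joint_nc_C = p_nc * p_C_given_nc    # P(nc, C) = 0.06
p_C = joint_c_C + joint_nc_C        # P(C)     = 0.38
p_NC = 1 - p_C                      # P(NC)    = 0.62

post_c_C = joint_c_C / p_C          # P(c|C)  ~ 0.84
post_nc_C = joint_nc_C / p_C        # P(nc|C) ~ 0.16

print(round(p_C, 2), round(p_NC, 2), round(post_c_C, 2), round(post_nc_C, 2))
```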
Problem 2.18. Regarding the previous problem and given the condi-
tional probabilities: the probability that the consultant reports C given
that the contract will, in fact, happen is P(C|c) = 0.80, and that the
consultant reports NC given that the contract will not happen is
P(NC|nc) = 0.90. (a) What is the economic value of this additional
information, i. e. the Expected Value of Sample Information (EVSI)?
(b) Let us suppose that another consultant (consultant-3) is offering
another report: C: contract, U: undefined, and NC: no contract. The
machine owner has a register of qualifications of consultant-3 with the
following conditional probabilities: P(C|c) = 0.9, P(U|c) = 0.05,
P(NC|c) = 0.05, and P(C|nc) = 0.1, P(U|nc) = 0.2, P(NC|nc) = 0.70.
What is the EVSI of consultant-3? (c) Which consultant is better?
Solution 2.18 (a) For solving the problem, we will use the computa-
tion of the a posteriori probabilities and the probabilities of the reports.
Then we model the decision tree shown in Fig. 2.29. Let us
observe the sequence of events: first the occurrence of the reports, repre-
sented by a random variable, then the decision according to the report,
and finally the occurrence of the nature variable that takes the values
contract or no contract.
According to the decision tree, the Expected Value of this addi-
tional information is $18.64. The strategies are warehouse manage-
ment system in case the consultant recommends contract (C) and
integrated material requirement planning in case the consultant
recommends no contract (NC). Then the Expected Value of Sample
Information is EVSI = $18.64 − EV = $18.64 − $11.2 = $7.44; that is,
the value that the machine owner is willing to pay for the consultant
is $7.44. Recalling that the EVPI is $10.80, the EVSI is 70% of
the EVPI. This fraction is called the informational efficiency of the
consultant.
(b) Proceeding in the same way for consultant-3, the value
that the machine owner is willing to pay for consultant-3 is $8.66.
Recalling that the EVPI is $10.80, this EVSI is 80% of the EVPI.
(c) Comparing the informational efficiency of the consultant (70%)
and consultant-3 (80%), we can easily see that consultant-3 is better
than the consultant.
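The EVSI computation can be sketched from the joint probabilities alone, without drawing the tree. Note that exact arithmetic gives approximately $18.68 and EVSI approximately $7.48; the text's $18.64 and $7.44 come from using the rounded posteriors of Table 2.12.

```python
# EVSI via the joint probabilities P(state, report) implied by the
# priors (0.4, 0.6) and the likelihoods P(C|c)=0.8, P(NC|nc)=0.9.
payoffs = {  # thousands of dollars: [contract, no contract]
    "warehouse management system":              [40, -8],
    "workflow management system":               [20, 4],
    "integrated material requirement planning": [12, 10],
}
joints = {"C": [0.32, 0.06], "NC": [0.08, 0.54]}  # [P(c,R), P(nc,R)]

# For each report, take the best decision under the implied posterior.
ev_sample = sum(max(sum(j * v for j, v in zip(js, row))
                    for row in payoffs.values())
                for js in joints.values())
evsi = ev_sample - 11.2  # EV without extra information (Solution 2.17)

print(round(ev_sample, 2), round(evsi, 2))
```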
Table 2.13: The environment agency reviewed the impacts on the
pollution reduction of its projects and assigned new probabilities to
the implementation policies.

  Policy   Implementation Probability
  I                  0.4
  II                 0.1
  III                0.2
  IV                 0.2
  V                  0.1
Solution 2.19 (a) EV: The best solution is the quality management sys-
tem with value 15.4%. (b) EOL: The best solution is the quality manage-
ment system with value −14.6%. (c) New probability distribution:
The best solution is the legacy system with 20.4%.
Table 2.14: Miramar Co. is going to introduce one of three new
improvements to its products. The market conditions (favourable,
stable, or unfavourable) will determine the profit or loss the company
realises.

                              Market Conditions
                         Favourable    Stable    Unfavourable
  Decision                  0.2          0.7         0.1
  Web page               $120.000      $70.000   $ −30.000
  More personal contact    60.000       40.000     20.000
  Automation               35.000       30.000     30.000

Solution 2.20 (a) EV: The best solution is the web page with $70. (b)
EOL: The best solution is the web page with −$50. (c) EVPI = $6. (d)
EVSI = $0.22, i. e. an informational efficiency of approximately 4%.
Table 2.16: Matrix of pair comparison between projects A, B, and C
according to criterion C1.

           improve account visibility (C1)
  project     A      B      C
  A           1      3      2
  B          1/3     1     1/5
  C          1/2     5      1

This matrix shows that project A is of "moderate importance" over
project B under criterion C1. Likewise, C is of "essential or strong im-
portance" over B. Note that the diagonal of the matrix is 1, which
logically indicates "equally preferred". The resulting matrices ac-
cording to the other criteria: improve credit process cycle time (C2),
reduce processing costs of students insurance (C3), and extend offer to
foreign study travel (C4), are shown below:

   1    6   1/3        1   1/3   1          1   1/3   1/2
  1/6   1   1/9   ,    3    1    7   , and  3    1     4   .
   3    9    1         1   1/7   1          2   1/4    1
Table 2.17: The first step is to add the preference values in each
column of the pair comparison matrix.

           improve account visibility (C1)
  project     A      B      C
  A           1      3      2
  B          1/3     1     1/5
  C          1/2     5      1
            11/6     9     16/5

Table 2.18: The second step is to compute the standardised matrix.

           improve account visibility (C1)
  project     A      B      C
  A          6/11   3/9    5/8
  B          2/11   1/9    1/16
  C          3/11   5/9    5/16

Table 2.19: The next step is to average the values in each row.

           improve account visibility (C1)
  project     A        B        C       average
  A         0.5455   0.3333   0.6250    0.5012
  B         0.1818   0.1111   0.0625    0.1185
  C         0.2727   0.5556   0.3125    0.3803
  Sum                                   1.0000
Let us observe that the sum of the elements of the vector is 1. In the
same way we calculate the vectors according to the other criteria:

  0.2819      0.1790          0.1561
  0.0598  ,   0.6850  , and   0.6196  .
  0.6583      0.1360          0.2243
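The column-normalise-and-average procedure of Tables 2.17 to 2.19 can be sketched in a few lines. Note the exact average for project A comes out as 0.5013; the text's 0.5012 appears to be a truncation.

```python
# AHP preference vector for criterion C1: normalise each column of
# the pairwise comparison matrix, then average each row.
matrix = [  # projects A, B, C compared pairwise
    [1,   3, 2],
    [1/3, 1, 1/5],
    [1/2, 5, 1],
]
n = len(matrix)
col_sums = [sum(row[j] for row in matrix) for j in range(n)]
normalised = [[row[j] / col_sums[j] for j in range(n)] for row in matrix]
weights = [sum(row) / n for row in normalised]

print([round(w, 4) for w in weights])  # ~ [0.5013, 0.1185, 0.3803]
```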
Table 2.20: Matrix representing the weights of each project according
to each criterion.

                      criteria
  project    C1       C2       C3       C4
  A         0.5012   0.2819   0.1790   0.1561
  B         0.1185   0.0598   0.6850   0.6196
  C         0.3803   0.6583   0.1360   0.2243
Table 2.21: Pair comparison matrix to determine the relative
importance or weighting of the criteria.

  criteria   C1    C2    C3    C4
  C1          1   1/5     3     4
  C2          5    1      9     7
  C3         1/3  1/9     1     2
  C4         1/4  1/7    1/2    1
The following table (2.22) shows the normalised table and the
average vector that is obtained in the last column.
Clearly, “improve credit process cycle time” (C2 ) is the highest
priority criterion, “improve account visibility” (C1 ) is the second, etc.
The next step is to combine the matrices obtained.
Table 2.22: Normalised matrix for criteria with row average.

  criteria    C1       C2       C3       C4      average
  C1         0.1519   0.1375   0.2222   0.2857   0.1993
  C2         0.7595   0.6878   0.6667   0.5000   0.6535
  C3         0.0506   0.0764   0.0741   0.1429   0.0860
  C4         0.0380   0.0983   0.0370   0.0714   0.0612
                                                 1.0000
thus the multiplication of the matrix by the vector yields the final
weights of each project:

  0.5012 0.2819 0.1790 0.1561     0.1993     0.3091
  0.1185 0.0598 0.6850 0.6196  ·  0.6535  =  0.1595  .
  0.3803 0.6583 0.1360 0.2243     0.0860     0.5314
                                  0.0612
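The final synthesis is just a matrix-vector product: the project scores per criterion (Table 2.20) weighted by the criteria averages. Note the C1 weight used here, 0.1993, is the recomputed row average of Table 2.22.

```python
# Final AHP weights: project score matrix (Table 2.20) times the
# criteria weight vector (row averages of Table 2.22).
scores = [  # rows: projects A, B, C; columns: criteria C1..C4
    [0.5012, 0.2819, 0.1790, 0.1561],
    [0.1185, 0.0598, 0.6850, 0.6196],
    [0.3803, 0.6583, 0.1360, 0.2243],
]
criteria_weights = [0.1993, 0.6535, 0.0860, 0.0612]

final = [sum(s * w for s, w in zip(row, criteria_weights))
         for row in scores]

print([round(f, 4) for f in final])  # ~ [0.3091, 0.1595, 0.5314]
```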
Table 2.23: You can determine an acceptable level of inconsistency by
comparing CI with a random index, RI, obtained from randomly
generated pairwise comparisons. This indicator has been generated in
the laboratory and depends on the number n of items to be compared.

  n    2     3     4     5     6     7     8     9     10
  RI   0   0.58  0.90  1.12  1.24  1.32  1.41  1.45  1.51

In general, the degree of consistency is satisfactory if CI/RI is less
than 0.10, which is the case here. If this does not happen, there is a high
probability of inconsistency and the result may not be significant.

AHP in Excel sheets. Figure 2.31 shows the pairwise comparison
matrix according to the criterion "improve account visibility" (C1).
Let us note that the format of the pairwise comparison is a fraction.
Fractions can be formatted in Excel by choosing the sequence of
Format-Cells-Number-Fraction commands (the Excel sheet can be
downloaded from ahp.xls). Figure 2.31 is self-explanatory.
Let us remember that the average of the rows represents the vector
of preferences according to the "improve account visibility" (C1) cri-
terion. To complete this stage in AHP, it is necessary to calculate the
preference vectors for the other three criteria: C2, C3 and C4.
The next step is to develop the pair comparison matrix and nor-
malised matrix for our four criteria. The spreadsheet that corre-
sponds to this step is shown in Figure 2.32. These two matrices are
constructed in a similar way to the two matrices in Figure 2.31.
The following step consists of computing the weights of each
project according to the corresponding weights of the projects for each
criterion and the weights of all criteria. The matrix multiplication by
a vector is shown in Figure 2.33. Let us observe that we copied a column
as a row to facilitate the multiplication of a matrix by a vector using
the Excel command =SUMAPRODUCTO() as shown in the Figure.
Finally, Figure 2.34 shows the worksheet to evaluate the degree
of consistency.
Solution For solving the problem we use the Excel sheet ahp.xls
and enter the corresponding values. Let us observe that this problem
has 2 criteria and 3 alternatives. In the sheet we have the table for
obtaining the weights in the case of three alternatives, but not in the
case of 2, which, however, is trivial to calculate. The first step is to
calculate the weights of each alternative under both criteria. According
to Table 2.24 the weights are: (0.2946, 0.6486, 0.0714) under the criterion
"profits" and (0.1222, 0.2299, 0.6479) under the criterion "potential
growth". Now we calculate the weights for each criterion, since
"oct2019" considers the company's potential growth of "moderate
importance" over profits. According to Table 2.15 this preference cor-
responds to the value 3, which gives weights 0.75 and 0.25 for profits
and growth potential respectively (why?). We leave this computation to
the students. The third step consists of computing the final weights
of the alternatives: for A it is 0.2946 × 0.75 + 0.1222 × 0.25 = 0.2515, for
B it is 0.5439, and for C it is 0.2155, calculated similarly. The last step of
the AHP method is to determine whether each prioritisation is consistent,
which is the question of the following problem.
Solution 2.22 This corresponds to the last step of the AHP method.
We will compute the consistency index for each pair comparison.
The first table, which represents the comparison of alternatives
according to the first criterion, gives CI/RI = 0.0701. The second ta-
ble (the comparison of alternatives according to the second criterion)
gives CI/RI = 0.0032. Finally, the consistency of the criteria is obvious,
because there are only two options. Thus, we say that the weights of the
alternatives are consistent, because each index CI/RI gives a value less
than 0.10.
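The consistency check itself is a short calculation. Since the matrices of this problem (Table 2.24) are not reproduced in the text, the sketch below illustrates it on the criteria matrix of Table 2.21 with its weight vector, using RI = 0.90 for n = 4 from Table 2.23.

```python
# Consistency ratio for the criteria matrix of Table 2.21:
# CI = (lambda_max - n) / (n - 1), CR = CI / RI.
matrix = [
    [1,   1/5, 3,   4],
    [5,   1,   9,   7],
    [1/3, 1/9, 1,   2],
    [1/4, 1/7, 1/2, 1],
]
weights = [0.1993, 0.6535, 0.0860, 0.0612]
n = len(matrix)

# Estimate lambda_max from the weighted row sums divided by weights.
ratios = [sum(a * w for a, w in zip(row, weights)) / weights[i]
          for i, row in enumerate(matrix)]
lambda_max = sum(ratios) / n
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.90  # RI for n = 4 (Table 2.23)

print(round(cr, 3))  # below 0.10, so the judgements are consistent
```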
Solution 2.25 This case again has 3 criteria and 3 alternatives and
is solved like the previous problem. In fact, the final weights of the
alternatives are (0.2027, 0.5060, 0.2913). Therefore, the most convenient
choice is the blue fund.
Table 2.31: Pair comparison matrices for the criteria of Problem 2.28.

            benefits           PE               cost
  project   A    B    C     A    B    C      A    B    C
  A         1    4    6     1    2   1/3     1   1/3  1/4
  B        1/4   1    2    1/2   1   1/6     3    1   1/2
  C        1/6  1/2   1     3    6    1      4    2    1

among the three projects and maintains that three criteria must be
considered: benefits, probability of success (PE), and costs. The pair-
wise comparison matrices for each project and for the criteria are
given in Tables 2.31 and 2.32. Sort the projects using AHP.
Problem 2.29. The investment program selects the experts who ap-
ply to the program according to three criteria: years of experience,
number of projects, and client reports. The program must select
among three applicants for the last two vacancies. The chief wonders
which applicants to choose. Tables 2.33 and 2.34 show the pair
comparisons and the data associated with each expert. Help the chief
select the best expert.
3.1 Introduction
[Figure 3.1: BPMN 2.0 poster. It summarises the notation's element
groups with their definitions: activities (task, sub-process, transaction,
event sub-process, call activity, activity markers and task types),
events (start, intermediate, and end; catching and throwing; thirteen
types from none to terminate), gateways (exclusive, event-based,
parallel, complex), conversations, choreographies, collaboration
diagrams, swimlanes (pools and lanes), message flows, and data
(data object, collection, data input, data association, data store).
© 2011]
4. Consider the group of objects events in the poster. Read the def-
initions of each object corresponding to this group.
6. Consider the group of objects data in the poster. Read the def-
initions of each object corresponding to this group and answer the
following questions: What is the difference between a data store
and the other data types? Must a data store object be only an
electronic database?
Solution 3.19 The standard BPMN 2.0 has several types of models,
such as: conversations, choreographies, and collaboration dia-
grams. There is an abstract example of a collaboration diagram in
the centre of the poster of Fig. 3.1.
4. According to the poster of Fig. 3.1, the events can be classified
along two dimensions: the event type and the position of the
event in the process model. There are thirteen event types, from
none to terminate, and there are three subgroups according to the
position in the model: start, intermediate, and end events.
The start events are those that trigger (or initiate) a process (e.g.,
when an email from the customer is received). The intermediate
events are those that occur during the process execution (e.g., the
sudden cancellation of an order by the customer during the process).
Finally, the end events are those that end a process (e.g., a link to
another process not modelled here).
In addition, events can be classified as catching or throwing
events, depending on whether they receive or send the event infor-
mation. The catching events are white-coloured and the throw-
ing events are black-coloured. For example, the catching intermedi-
ate timer event named 24h (a clock inside a double circle
line) located after the task send email means that the process waits
24 hours after sending the email, i.e., the process continues only 24
hours after the email is sent.
6. According to the poster, a data store is a place where the process can
read or write data, e.g., a database or a filing cabinet. It persists beyond
the lifetime of the process instance. The last characteristic (persis-
tence) differentiates a data store from the other data objects. A data
input, for example, disappears after the object is consumed.
As the definition says, data stores can be physical objects, not
only digital elements.
3. What is the role of the terminate end event (not a standard end
event) modelled at the end of the vendor process? What would
occur if this event were a standard end event?
3. The role of the terminate end event modelled at the end of the
vendor process is to kill the complete process. Let us observe that,
after a question is made by the customer about her order, the loop
remains active. This iteration continues until the terminate
end event is reached.
4. Let us observe that the exclusive gateway that opens both paths is
not closed by another gateway. Is this correct?
2. Both of them catch and handle the occurring events, but only the
non-interrupting type (here the escalation event late delivery)
does not abort the activity it is attached to. When the interrupt-
ing event type (here the error event undelivery) triggers, the
execution of the current activity stops immediately.
Problem 3.23. Consider the model of Fig. 3.5 and describe in words
the behaviour of the process.
Solution 3.23 The process for the stock maintenance of Fig. 3.5 is
triggered by a conditional start event. This means that the process is
instantiated when the condition becomes true, in this example
when the stock level goes below a certain minimum. In order to
increase the stock level an article has to be procured. Therefore, we
use the same Procurement process as in the order fulfilment and
refer to it by the call activity procurement, indicated by the thick
border. Similar to the order fulfilment process, this process handles
the error exception by removing the article from the catalogue. But
in this stock maintenance process there appears to be no need for
handling a late delivery escalation event. That is why it is left out
and not handled. If the procurement sub-process finishes normally,
the stock level is above the minimum and the Stock Maintenance
process ends with the end event article procured.
2. The first task in this sub-process is to check whether the article
to be procured is available at the supplier. If not, this sub-process
will throw the not deliverable error that is caught by both order
2. Do you think that this diagram would be useful for a general high-
level description of the process? Would it be useful for the technical
requirements of a software support application?
2. Let us observe that all tasks are manual, which means that the
process is completely executed by humans, with no process engine
involved. Could the process be supported by a process engine?
3.9.
1. The process assumes that only the trouble ticket system process
is supported by a technical system. It contains two user tasks, one
send task, two script tasks, and one service task. Only the user tasks
are not fully automatic (they are semi-automatic), while the others
are executed by a system and coordinated by a process engine
(including the send task).
This design demands a reasonable intensity of coordination. In
fact, the whole process is executed manually, except for the trou-
ble ticket system. Only the trouble ticket system requires a
process engine for the coordination, and it can be considered an
introductory situation for a process engine for the whole collaboration
process.
2. The process starts when a message from the key account manager
is received. It can be assumed that the message is an email ad-
dressed to the 1st level support of the trouble ticket system. When
the process engine receives the issue by email, a ticket is
opened automatically and the engine shows the 1st level support the
ticket. She can edit the ticket and decide whether the issue is resolved
or not. If yes, then the process engine sends an email to the key account
manager and the ticket is closed automatically. If the issue is not
resolved, the 2nd level support edits the ticket again after receiving
the ticket from the process engine. The 2nd level support decides
about the resolution of the issue. If she marks it as resolved, the pro-
cess engine communicates this to the 1st level support, sends an email
to the key account manager, and closes the ticket. If that is not the case,
the process engine communicates with the 2nd level support agent and
the system inserts the issue into the product backlog automatically. The
process continues automatically as before.
1. Each year in September, in the year preceding the year the Prize
is awarded, around 3000 invitations or confidential nomination
forms are sent out by the Nobel Committee for Medicine to se-
lected Nominators. The Nominators are given the opportunity to
nominate one or more Nominees. The completed forms must be
made available to the Nobel Committee for Medicine for the se-
lection of the preliminary candidates. The Nobel Committee for
Medicine performs a first screening and selects the preliminary
candidates. Following this selection, the Nobel Committee for
Medicine may request the assistance of experts. If so, it sends the
list with the preliminary candidates to these specially appointed
experts with the request to assess the preliminary candidates’
work. From this, the recommended final candidate laureates and
associated recommended final works are selected and the Nobel
Committee for Medicine writes the reports with recommendations.
The Nobel Committee for Medicine submits the report with rec-
ommendations to the Nobel Assembly. This report contains the
list of final candidates and associated works. The Nobel Assembly
chooses the Nobel Laureates in Medicine and associated works through a
majority vote, and the names of the Nobel Laureates and associated
works are announced. The Nobel Assembly meets twice for this
selection. In the first meeting of the Nobel Assembly the report is
discussed. In the second meeting the Nobel Laureates in Medicine
and associated works are chosen. The Nobel Prize Award Cere-
mony is held in Stockholm.
2. Yes, absolutely. The process contains five send and receive tasks
that can be automated and executed by a process engine. The other
tasks are user tasks that can be coordinated by the same machine.
1. The task initiate sell order is executed if the condition stock price
drops 15% below purchase price is true.
3. The process starts with the user task receive confirmation. In parallel
to this task, after two days, the task send reminder is executed
automatically every three days. After receiving the confirmation the
process ends. Ensure that the process terminates correctly.
Solution 3.29 These situations are very common. We will show that
they can be modelled with enough precision:
1. The model can be found in Fig. 3.10 above. The stock price drops
15% below purchase price is a signal and not a message. A signal
start event contains information sent without an addressee; instead, a
message has an organisation, person, or machine that receives the
information. In this example, we do not have information about how
the initiate sell order task is carried out, therefore we modelled it as a
sub-process.
2. The model can be found in Fig. 3.10 below. This is a typical
example of a weakly structured process. They are called ad-hoc
96 practical business process engineering
4. The models are shown in Fig. 3.12. Both are clearly different
iterations. The first one is called a while-do iteration, while the second
one a repeat-until.
Solution 3.31 The solution can be found in Fig. 3.15. This model
describes a typical situation of inclusive gateway use. It means
Problem 3.32. Model in BPMN 2.0 the following process: The process
starts when a user executes the task announce issues for discussion,
then she must wait 6 days before the task e-mail discussion deadlines
warning, which is automatically executed. Next, the process ends.
Solution 3.32 The solution is in Fig. 3.16. In this case we modelled
a throwing task as an automatic task, which is triggered by the
catching timer intermediate event called 6days. Because the whole
process is driven by a human and we do not have more information,
the start and end events are standard (or plain). Let us note that after
the task announce issues for discussion ends, the process engine
suspends all activity for 6 days, after which the following task continues.
Solution 3.33 The solution is in Fig. 3.17. The terminate end event
is absolutely necessary, because it kills the existing instances, thus
Solution 3.36 The solution is in Fig. 3.20. Let us observe that the
task is of type user; it is triggered somehow by a person and finishes
exactly 14 days after starting. It does not have a normal end.
Solution 3.39 The solution is in Fig. 3.23. The model contains the
same participants as the two previous examples. The exclusive
event-based gateway models a decision taken by the customer, not
internally by the manufacturer. Note that here the event-based
exclusive gateway that opened three threads is closed after the three
activities, because, according to the text, the process must finish after
all activities were executed.
Problem 3.40. Model the following process: Once the message receive
customer flight and hotel room reservation request is received,
the sub-process consists of: (1) executing search flights based on
customer request, then evaluate flights within customer criteria;
(2) executing search hotels based on customer request, then
evaluate hotel rooms within customer criteria. Both groups of tasks
can be executed in parallel, both can be skipped, or only one of them
can be executed. Let us suppose that this sub-process is executed three
times sequentially; add this characteristic to the model.
Solution 3.41 The solution is in Fig. 3.25. Let us observe the
modelling with an exclusive event-based gateway. The paths are
controlled by three external events. The task request credit card
information from customer can be interrupted exactly after 24 hours. We added
Problem 3.42. Model the following process in BPMN 2.0: The issue
list manager executes review the issue list in the database every Friday;
then the decision whether there are issues that are ready
must be made before going through the discussion cycle. The decision
considers that if there are no issues ready, then the process is over
for that week, to be taken up again the following week on Friday.
Instead, if there are issues ready, then the process will continue with
the sub-process named discussion cycle. Consider the decision as a
manual task. Could you model the process supported by a process
engine?
Solution 3.42 The solution is in Fig. 3.26. The issue list manager is
modelled by a participant. The process starts every Friday as
indicated in the text. The issue list manager uses a computer, thus the
review is modelled by a user task. After the reviewing task, the decision
follows, which is a manual task. This case is represented by the first model of
Fig. 3.26.
Now, let us discuss the case in which the process is executed by a
process engine. In this case, the user task review the issue list and
the decision can be put together and supported by a computer, as
shown in the centre figure. In an extreme case all tasks can be
automatic and executed by the process engine, as shown in the bottom
figure.
Problem 3.43. Model the following process: The process begins with
a message start event representing the message received from the
customer. Then, the process engine sends out the application form, then it
waits for the application form from the customer. The machine waits
seven days for the application from the customer. If the machine does
not receive an answer from the customer by the seventh day, then the
Problem 3.44. Model the following process. When the soccer team of
the Universidad de Chile wins an official game, the process machine
automatically sends greetings (by SMS) to its fans registered
in the database. In addition to the greetings, a fan receives an
invitation to participate in a bet game related to the next game of the
team. If the process machine does not receive the acceptance or rejection
Solution 3.44 The solution is shown in Fig. 3.28. The model needs
an exclusive event-based gateway for the external decision of the fan.
Let us observe that the process machine does not receive a message
at the start, but it receives a signal of every game outcome, whether the
team won or lost.
externally by the throwing event called with the same name (after
error by charging). The compensation tasks of booking flight and
hotel are carried out in sequence and automatically. Then a user
updates the customer record. Compensation means to move back, to
go to the start of the corresponding task. The name for the
compensation tasks here is cancel.
The third event sub-process is the booking error 1, which catches
any other error internal or external to the transaction. In this case,
both booking tasks are compensated and the flows leave the
transaction through the boundary event booking error 2. The rest of
the model is trivial.
Consider the decision table of Fig. 3.30. The table shows a set
of four rules for determining the creditworthiness of a customer
according to the income and assets measured in currency units. We will
try to answer the following questions. What is the creditworthiness
of a customer that (all quantities in thousands) has:
1. Only one rule is applicable, then we obtain only one output. This
is the ideal case.
2. Multiple rules are applicable and the rules produce different outputs.
We need an additional rule for selecting the rule that produces
the desired output.
3. Multiple rules are applicable, but all rules produce the same output.
No problem.
The DMN Table of Fig. 3.32 presents the rules for selecting rules
in any DMN Table. For all cases, if only one rule is applicable, then
the hit policy will be UNIQUE. If the number of desirable outputs is
one, then we can always use the hit policy FIRST. If there are tuples
of inputs which correspond to many equal outputs, or only one
output, then the hit policy will be ANY. Finally, let us observe that
the hit policy COLLECT can always be applied, but we obtain a
list of outputs, including the case that only one rule is applicable.
Instead, by applying the UNIQUE, FIRST, or ANY policies, we will
obtain only one output, not a list of outputs.
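The four hit policies above can be sketched as a small Python helper; this is our own illustrative encoding, in which each applicable rule simply yields its output value in rule order:

```python
def apply_hit_policy(outputs, policy):
    """outputs: output values of the applicable rules, in rule order."""
    if not outputs:
        raise ValueError("no applicable rule: not admissible for a DMN Table")
    if policy == "UNIQUE":
        if len(outputs) != 1:
            raise ValueError("UNIQUE requires exactly one applicable rule")
        return outputs[0]
    if policy == "FIRST":
        return outputs[0]          # the rule listed first wins
    if policy == "ANY":
        if len(set(outputs)) != 1:
            raise ValueError("ANY requires all applicable rules to agree")
        return outputs[0]
    if policy == "COLLECT":
        return list(outputs)       # always applicable, but yields a list
    raise ValueError("unknown hit policy: " + policy)
```

For example, `apply_hit_policy(["yes", "no"], "FIRST")` returns `"yes"`, while the same input under COLLECT returns the list `["yes", "no"]`.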
Now, we will try to do a more systematic analysis of the DMN
Table of Fig. 3.30. That is, given a DMN Table we will determine the
cases in which we obtain: only one output, multiple and different outputs,
multiple and same outputs, and no output. As we said, the last case
is not admissible. The analysis consists of determining an extensive
decision table, i.e. a table that classifies every case (or customer)
into disjoint sets, thus each tuple of inputs is disjoint. Then, we will
apply the rules of the original DMN Table to each tuple of disjoint
inputs and will obtain the corresponding outputs, verifying each case.
We apply the method in the following problem:
Figure 3.33: A Simple DMN Table. It describes the decision about the creditworthiness of a customer. It has four rules and two inputs. Every DMN Table requires only one value for the output.
Problem 3.46. For the DMN Table of Fig. 3.33 determine the extensive
decision table, apply the rules to each tuple of inputs of this
table, and determine an adequate hit policy for the DMN Table, if
any.
Solution 3.46 For determining an extensive decision table, we need
to consider each input and write the partition of possible values of
it. The DMN Table of Fig. 3.33 has two inputs. The set of values of
the input income is [20, ∞[ and, according to the DMN Table, the
partition15 of this set is [20, 30]∪]30, 40]∪]40, ∞[. The set of values of the
input assets is [0, ∞[ and, according to the DMN Table, the partition
is [0, 100]∪]100, ∞[. Let us observe that the number of subsets of the
partition of values of income is 3 and the number of subsets of the
partition of the input assets is 2. Then we will have 3 × 2 = 6
combinations of tuples that classify all customers. This is shown in the
extensive table on the left side of Fig. 3.34.
15 Remember: a partition of a set is a set of sets such that they are disjoint and the union is the original set.
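The construction of the disjoint input combinations is a cartesian product of the partitions, which can be sketched in Python (the interval labels below are just illustrative strings):

```python
import itertools

# partitions of the two inputs of the DMN Table of Fig. 3.33
income_partition = ["[20, 30]", "]30, 40]", "]40, inf["]
assets_partition = ["[0, 100]", "]100, inf["]

# each row of the extensive table is one disjoint combination of subsets
extensive_rows = list(itertools.product(income_partition, assets_partition))
assert len(extensive_rows) == 3 * 2  # 6 rows, one per representative customer
```

Each of the 6 rows stands for one representative customer of the extensive table.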
Now, we will apply the four rules of the original DMN
Table of Fig. 3.33 to each exclusive set of values of the extensive table.
We will consider a representative customer for each subset. This is
shown in the right table of Fig. 3.34. Let us study this
extensive table in detail. We have six representative customers and we will
apply each rule of the extensive table (not of the original DMN Table!)
to every customer. To customer 1 only rule 1 is applicable,
obtaining the output no. To customer 2, rules 1
Problem 3.47. Consider the DMN Table of Fig. 3.35. Obtain the
extensive decision table and determine the best hit policy, if any.
Figure 3.35: The Applicant Risk Rating. It classifies applicants according to the age and medical history.
Solution 3.47 The solution is the table of Fig. 3.36. Let us observe
that we simulated the table by using a macOS Numbers sheet. You
can use MS Excel just as well. We are showing the formula for the
calculation of applying rule 1 to customer 5.
Let us assume that, because of the logic of the problem, we need to
obtain only one output. Because each input tuple corresponds to only
one output, the best hit policy is UNIQUE.
Problem 3.48. Consider the DMN Table of Fig. 3.37. Obtain the
extensive decision table and determine the best hit policy, if any.
Problem 3.49. Consider the DMN Table of Fig. 3.39. Obtain the
extensive decision table and determine the best hit policy, if any.
Solution 3.49 The solution is the table of Fig. 3.40. It is reasonable
to assume that we need to obtain only one output from this decision
table. Because there is at least one input tuple (see employee 6) that
corresponds to many different outputs, the best hit policy is FIRST.
Figure 3.39: The Applicant Risk Rating (2nd Version). It classifies applicants according to the applicant age and the medical history.
Problem 3.50. Consider the DMN Table of Fig. 3.41. Obtain the
extensive decision table and determine the best hit policy, if any.
Problem 3.51. Consider the DMN Table of Fig. 3.43. Obtain the
extensive decision table and determine the best hit policy, if any.
will apply a sum of total days for the holidays. The best hit policy is
COLLECT.
Problem 3.53. Consider the DMN Table of Fig. 3.47. Obtain the
extensive decision table and determine the best hit policy, if any.
Problem 3.54. Consider the following situation and answer the questions
below.
For determining the final price, it is necessary to know the member
type and whether the day of the purchase is a Black Friday. In fact, if the
member is gold and the day is a Black Friday, then the final price is $69.99.
For any other day, the normal price is $99.99, except when the
member is silver, in which case the price is $89.99.
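One possible reading of these rules (checking the gold/Black-Friday rule before the silver rule, in the style of a FIRST hit policy) can be sketched as follows; the function name and the rule ordering are our assumptions, not part of the problem statement:

```python
def final_price(member, black_friday):
    """Sketch of the pricing rules under a FIRST-style rule ordering
    (an assumed reading of the problem statement)."""
    if member == "gold" and black_friday:
        return 69.99   # gold member on a Black Friday
    if member == "silver":
        return 89.99   # silver member
    return 99.99       # normal price in every other case
```

Under this reading, a gold member on an ordinary day pays the normal price of $99.99.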
Problem 3.55. Consider the following situation and answer the questions
below.
The decision of a pre-qualification of creditworthiness depends on
three criteria: the credit score rating, which can take the values: poor,
(a) Model the corresponding DMN Table and determine the best
hit policy, if any. (b) Implement the DMN Table in Signavio Academic
and verify that the selected hit policy is correct, if any.
Solution 3.55 (a) The DMN Table is shown in Fig. 3.51, while the
extensive table with the corresponding simulation is shown in Fig.
3.52. Let us observe that there are cases with more than one
output, but the outputs are the same. Therefore, the hit policy ANY is
applicable. (b) The implementation in Signavio Academic is not shown
in this book. However, we must consider a DMN diagram in which the
output of Table 3.1 is input for the Table of Fig. 3.51.
Problem 3.56. Consider the following situation and answer the questions
below.
The workers and the company agreed on at least 20 vacation days yearly
plus additional days according to the worker's age and years of service.
Each worker has the right to additional days according to the following
rules: (a) if the age is less than 25 or greater than or equal to 60, then
he will have 5 days. (b) If the years of service are greater than or equal to
30, he will receive 5 days too. (c) The most common case is when the
worker is between 18 and 60 years old (both quantities inclusive) and
the years of service are between 15 and 30 (also inclusive); the additional
days are 2. (d) The last case is when the age is between 45 and 60
(inclusive) and the years of service are less than 30; then the worker will
have 2 days.
(a) Model the corresponding DMN Table and determine the best
hit policy, if any. (b) Implement the DMN Table in Signavio Academic
and verify that the selected hit policy is correct, if any.
Solution 3.56 (a) The DMN Table is shown in Fig. 3.53, while the
extensive table with the corresponding simulation is shown in Fig. 3.54.
Let us observe that there are cases with more than one output,
which must be added to the total days of vacation. Therefore, the
hit policy COLLECT is applicable, because we need to sum the total
days of vacation. (b) The implementation in Signavio Academic is not
shown in this book. However, we must consider a DMN diagram
in which the output of Table 3.1 is input for the Table of Fig. 3.53.
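Under the COLLECT hit policy selected above, every applicable rule contributes its days and the contributions are summed. A minimal sketch, assuming the rules (a)-(d) of the problem statement (the function names are ours):

```python
def additional_days(age, service):
    """Outputs of every applicable rule, collected in rule order."""
    outputs = []
    if age < 25 or age >= 60:
        outputs.append(5)  # rule (a)
    if service >= 30:
        outputs.append(5)  # rule (b)
    if 18 <= age <= 60 and 15 <= service <= 30:
        outputs.append(2)  # rule (c)
    if 45 <= age <= 60 and service < 30:
        outputs.append(2)  # rule (d)
    return outputs

def vacation_days(age, service):
    # 20 agreed base days plus the sum of all collected additional days
    return 20 + sum(additional_days(age, service))
```

A 50-year-old worker with 20 years of service matches rules (c) and (d), so she receives 20 + 2 + 2 = 24 days.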
Pn = (λ/µ)^n · P0 = (λ/µ)^n · (1 − λ/µ).

The cycle time, i.e. the average time a customer spends in the
queuing system (either in the queue or being served), is

W = 1/(µ − λ).
analysis and optimisation 125
The queue time, i.e. the average time a customer spends waiting in
the queue to be served, is

Wq = λ/(µ(µ − λ)).
The system length, i.e. the average number of customers in the
system, is given by the Little Law

L = λW.

Observe that

L = λW = λ · 1/(µ − λ) = λ/(µ − λ).
The queue length, i.e. the average number of customers in the queue,
is given also by the Little Law

Lq = λWq.

Observe that

Lq = λWq = λ · λ/(µ(µ − λ)) = λ²/(µ(µ − λ)).
The utilisation rate of the server, i.e. the probability that the
server is busy (i.e., the probability that a customer has to wait), is

U = λ/µ.

The idle rate of the server, i.e. the probability that the server is idle
(i.e., the probability that a customer can be served), is

I = 1 − U = 1 − λ/µ.
Let us observe that the last term 1 − λ/µ is also equal to P0 , that is
the probability of no customers in the queuing system is the same as
the probability that the server is idle.
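The formulas above can be collected into a small Python helper (a sketch; the function and dictionary keys are our own naming):

```python
def mm1_metrics(lam, mu):
    """Metrics of a simple queue system with arrival rate lam and
    service rate mu, from the formulas above."""
    assert lam < mu, "the queue grows without bound unless lam < mu"
    U = lam / mu                   # utilisation rate of the server
    P0 = 1.0 - U                   # empty-system probability, equal to the idle rate I
    W = 1.0 / (mu - lam)           # cycle time
    Wq = lam / (mu * (mu - lam))   # queue time
    L = lam * W                    # system length (Little Law)
    Lq = lam * Wq                  # queue length (Little Law)
    return {"P0": P0, "W": W, "Wq": Wq, "L": L, "Lq": Lq, "U": U, "I": P0}
```

For example, with λ = 1/2.5 [min−1] and µ = 1/2 [min−1] the helper reproduces P0 = 0.2, W = 10.0, Wq = 8.0, L = 4.0, Lq = 3.2 and U = 0.8.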
We will continue with exercises and problems. You can find the
original models in BPMN 2.0 and the corresponding models with
data for simulation, i.e. with the resources and times in the following
link: models.
Figure 4.2: Simple Queue System. The system satisfies the assumptions of a simple queue system: FIFO customer discipline, infinite customers. The time between arrivals and the processing time are exponentially distributed.
Problem 4.1. Let us suppose a bank office with a unique cashier
and queue. Between 12am and 1pm customers arrive every 2.5 minutes
on average. The cashier is able to process every customer in
2 minutes on average. The number of arrivals per time is Poisson
distributed and the time of serving each customer is exponentially
distributed. Answer: (a) Characterise the queue system. (b) Calculate
P0, W, Wq, L, Lq, U, and I by using the formulas provided by Queue
Theory. (c) Consider the BPMN model of Fig. 4.2 representing the
simple queue system. Calculate P0, W, Wq, L, Lq, U, and I by using a
simulator and compare the results with those obtained in (b).
Solution 4.1 (a) The bank office has a unique cashier and queue.
The queue discipline is FIFO (first-in-first-out), it is supposed that
the queue receives infinite customers, and the time between arrivals and
the processing time are exponentially distributed (remember that this is
equivalent to the number of arrivals and the number of processed
customers being Poisson distributed). This characterisation corresponds
to a simple queue.
(b) Given is λ = 1/2.5 [min−1] and µ = 1/2 [min−1]. It is first
important to verify that λ < µ, which is true. With this information,
we can calculate P0 = (1 − 2.0/2.5) = 0.2, W = 1/(1/2.0 − 1/2.5) =
10.0 [min], Wq = (1/2.5)/((1/2.0) · (1/2.0 − 1/2.5)) = 8.0 [min], L =
(1/2.5) · 10.0 = 4 [-], Lq = (1/2.5) · 8.0 = 3.2, U = (1/2.5)/(1/2.0) =
0.8, and P0 = I = 1 − 0.8 = 0.2. Let us observe that, because the
service time is 2.0, then W = Wq + 2.0 = 8.0 + 2.0 = 10.0, i.e. the
cycle time is equal to the queue time plus the processing time for
each customer.
(c) The simulation was carried out with the software BIMP (see the
Web page https://bimp.cs.ut.ee/). The results are: W = 9.6 [min],
Wq = 7.6 [min], and U = 0.79. L and Lq are not given by the software,
so we will use the Little Law: L = λW = (1/2.5) · 9.6 = 3.8 [-],
Lq = λWq = (1/2.5) · 7.6 = 3.0, and P0 = I = 1 − 0.79 = 0.21. Let
us note that these results are not exactly equal to those obtained from
the Queue Theory of part (b), but they are similar. The differences
depend on the number of runs and the construction of the simulator
software. However, they are good enough for our purposes.
1/2.5 < nA · 1/4.0 → nA > 4.0/2.5 = 1.6, then we take the value nA = 2.

The same procedure is carried out for role B, i.e. nB = 2.
The simulation was carried out with the software BIMP (see the
Web page https://bimp.cs.ut.ee/). The results are the following:
W = 20.9 [min]; Wq is obtained by summing the queue times of each
task, i.e. Wq = 6.6 + 6.4 = 13.0 [min]. L and Lq are not directly
obtained from the simulator, therefore they must be calculated by
using the Little Law, i.e.

L = λW = (1/2.5) · 20.9 = 8.4 [-] and Lq = λWq = (1/2.5) · 13.0 = 5.2 [-].

The utilisation rates are UA = 0.79 and UB = 0.79 for roles A and B
respectively. IA and IB are easily computed by IA = 1 − UA and
IB = 1 − UB respectively.
Solution 4.3 We follow the same argument as in the above problem, i.e.
each task can be assumed approximately as a simple queue system
and we need to compute the minimal resources n satisfying λ <
nµ. Thus, for the role AB in Fig. 4.4, the minimal number nAB of
resources will be

1/2.5 < nAB · 1/(4.0 + 4.0) → nAB > 8.0/2.5 = 3.2, and we take the value nAB = 4.
The results are: W = 12.5 [min]. The queue time for task A is
2.3 [min] and for task B 2.3 [min], then the queue time for the
process is Wq = 2.3 + 2.3 = 4.6 [min]. According to the Little Law,

L = λW = (1/2.5) · 12.5 = 5.0 [-] and Lq = λWq = (1/2.5) · 4.6 = 1.8 [-].

The utilisation rate is UAB = 0.79 for role AB. IAB is easily computed
by IAB = 1 − UAB.
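The rule λ < nµ used in these solutions can be sketched as a one-line helper; the strict inequality means we take the next integer above λ/µ:

```python
import math

def minimal_resources(lam, mu):
    """Smallest integer n satisfying lam < n * mu (strict inequality)."""
    return math.floor(lam / mu) + 1

# role A:  1/2.5 < nA  * 1/4.0, so nA  > 1.6 and nA  = 2
# role AB: 1/2.5 < nAB * 1/8.0, so nAB > 3.2 and nAB = 4
```

This reproduces the values nA = 2 and nAB = 4 computed above.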
egy is obviously better than the base case, but rarely possible. Indeed,
it is possible to implement it if no output of the previous task is an
input for the next task. Model the process of Fig. 4.5 in BPMN
and simulate it. Consider the time random variables as exponentially
distributed, FIFO queue discipline, and infinite customers. By simulating
the model obtain W, Wq, L, Lq for the complete process, and U
and I for each resource.
L = λW = (1/2.5) · 14.7 = 5.9 [-] and Lq = λWq = (1/2.5) · 6.6 = 2.6 [-].

The utilisation rates are UA = 0.79 and UB = 0.80 for roles A and B
respectively. IA and IB are easily computed by IA = 1 − UA and
IB = 1 − UB respectively.
this example, the sum of the service times of the tasks to compose is
4.0 + 4.0 = 8.0; we will consider a new task, called AB, with service
time 7.0, less than 8.0. The service time of the composed task depends
on the real context and there is no general rule for determining it.
Model the process of Fig. 4.6 in BPMN and simulate it. Consider
the time random variables as exponentially distributed, FIFO queue
discipline, and infinite customers. By simulating the model obtain
W, Wq, L, Lq for the complete process, and U and I for the resource
AB.
L = λW = (1/2.5) · 30.4 = 12.1 [-] and Lq = λWq = (1/2.5) · 23.5 = 9.4 [-].

The utilisation rate is UAB = 0.92. The idle rate is I = 1 − 0.92 =
0.08.
Problem 4.6. In this problem we will evaluate the triage strategy,
which consists in assigning special resources to particular cases. The
assignment is not flexible. Triage is a standard strategy used in
emergency rooms for prioritising patients according to their disease
severity. In this case, let us suppose that the task A can be divided
Table 4.1: Resource Table. It shows the results associated to each resource. ni is the number of resources for the resource i and Ui is the utilisation rate for the resource i.

nr  strategy         resource                  ni       Ui
1   base case        role A, role B            2, 2     0.79, 0.79
2   flexibilisation  role AB                   4        0.79
3   parallelisation  role A, role B            2, 2     0.79, 0.80
4   composition      role AB                   3        0.94
5   triage           role AS, role AC, role B  1, 1, 2  0.79, 0.80, 0.79
On the other hand, Table 4.2 shows the values associated with
each task. In this case, the tasks are grouped into paths. The base
case has only one path from start to finish: the path A − B. The
model of the flexibilisation strategy has the same path A − B. The
parallelisation strategy has two possible paths, A − B or B − A,
depending on which task is performed first; however, for the purpose
of calculating the cycle time of the whole process, the order does
not matter. The composition strategy has a single task AB, while the
triage strategy has two exclusive paths: AS − B or AC − B.
Table 4.3: Summary of Results. It summarises the results of n, W and U for the models of tables 4.1 and 4.2. The values Wq, Lq can be equally obtained, but they do not add concepts to our objectives.

nr  strategy         n   W [min]  U
1   base case        4   21.9     0.79
2   flexibilisation  4   15.0     0.79
3   parallelisation  4   16.3     0.80
4   composition      3   31.1     0.94
5   triage           3   30.2     0.79
Solution 4.7 The utilisation rate U for the entire process is obtained
by weighting the utilisation rates associated with each resource Ui by
the respective number of resources ni. The result is shown in
Table 4.3. We have repeated Table 4.1 for the purpose of further
explanation.
Instead, the cycle time W of the entire process is a little more
difficult to obtain, because it depends on the path and its frequency. In
the cases of the base case, flexibilisation and composition there is only
one path with frequency 1.0. In the model of the parallelisation strategy,
the cycle time of the whole process will be max{16.3, 10.3} =
16.3 [min]. The triage strategy has two paths: the first path occurs
with frequency 0.75 and has a cycle time of 14.1 + 11.1 = 25.2 [min],
while the second path occurs with frequency 0.25 and has a cycle
time of 34.2 + 11.1 = 45.3 [min]. This yields a weighted value of
W = 0.75 · 25.2 + 0.25 · 45.3 = 30.2 [min].
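The weighted cycle time is a frequency-weighted sum over the exclusive paths, which can be sketched as:

```python
def weighted_cycle_time(paths):
    """paths: (frequency, cycle_time) pairs; the frequencies sum to 1."""
    return sum(freq * ct for freq, ct in paths)

# triage strategy: two exclusive paths with frequencies 0.75 and 0.25
W_triage = weighted_cycle_time([(0.75, 14.1 + 11.1), (0.25, 34.2 + 11.1)])
```

W_triage evaluates to approximately 30.2 [min], the triage value reported in Table 4.3.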
Problem 4.8. Table 4.3 summarises the results of n, W and U for
the models of tables 4.1 and 4.2. The values Wq, Lq can be equally
obtained, but they do not add concepts to our objectives.
Determine the Pareto optimal solutions.
(1) First eliminate the worst alternatives, i.e. any alternative that is
the worst in all three criteria (n, W and U). (2) Compare the first
alternative pairwise with the rest. (3) Continue with step 1.
We will apply the procedure to Table 4.3 and show it in
Table 4.4. (1) First eliminate the worst alternatives, i.e. any alternative
that is the worst in all three criteria (n, W and U). In this case, as we
said, no alternative is the worst simultaneously in the three criteria.
(2) Compare the first alternative pairwise with the rest, i.e. compare
alternative 1 pairwise with 2, 3, 4, and 5. Comparing 1 and 2, we
can eliminate alternative 1, because both have the same n and
U, but alternative 1 has a worse W. (3) Continue with step 1.
After eliminating alternative 1, we try to eliminate the alternative that
is worst in all criteria (among alternatives 2, 3, 4, and 5), but no
alternative satisfies this condition. (4) We now compare alternative 2
pairwise with 3, 4, and 5. Comparing 2 and 3, we can eliminate 3 (we
say that 3 is dominated by 2). Alternatives 2 and 4 do not dominate
each other. In addition, 2 and 5 do not dominate each other.
We will continue with alternatives 4 and 5. (5) Continue with step
1 of the general procedure by eliminating alternative 4.
Therefore, the Pareto optimal solutions are alternatives 2 and
5.
Problem 4.9. Table 4.5 summarises the results of n, W and U for 6
alternative models. Determine the Pareto optimal solutions.
Table 4.5: Summary of Results. It summarises the results of n, W and U for alternative models.

nr  n   W [min]  U
1   8   10.5     0.82
2   10  10.5     0.75
3   7   15.5     0.60
4   4   12.3     0.80
5   8   8.3      0.67
6   10  14.2     0.90
Solution 4.9 We will follow the procedure for finding the Pareto
optimal solutions of Table 4.5 and write the analysis in Table 4.6. (1)
First, examine the entire list and eliminate the worst alternative. In
this case alternative 6 is eliminated. (2) We start the pairwise
comparison from alternative 1: 1 is dominated by 5, so 1 is eliminated.
(3) There is no worst alternative among 2, 3, 4 and 5. (4) We
continue the pairwise comparison from alternative 2: 2 is dominated by
5, so 2 is eliminated. (5) There is no worst alternative among 3,
4 and 5. (6) The pairwise comparison for 3 (comparing 3 with 4 and
3 with 5) does not give a winner. (7) The pairwise comparison for 4
and 5 does not give a winner.
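The elimination procedure can be checked with a small Pareto filter, assuming, as in the steps above, that all three criteria n, W and U are to be minimised:

```python
def pareto_optima(alternatives):
    """alternatives: dict number -> (n, W, U); every criterion is minimised."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return {nr for nr, crit in alternatives.items()
            if not any(dominates(other, crit)
                       for other_nr, other in alternatives.items() if other_nr != nr)}

# data of Table 4.5: nr -> (n, W [min], U)
table_4_5 = {1: (8, 10.5, 0.82), 2: (10, 10.5, 0.75), 3: (7, 15.5, 0.60),
             4: (4, 12.3, 0.80), 5: (8, 8.3, 0.67), 6: (10, 14.2, 0.90)}
```

pareto_optima(table_4_5) returns {3, 4, 5}, matching Table 4.6.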
Table 4.6: Computation of Pareto Optima. It contains the eliminated alternatives and shows the Pareto optimal solutions: alternatives 3, 4, and 5.

nr  n   W [min]  U     dom. by
1   8   10.5     0.82  5
2   10  10.5     0.75  5
3   7   15.5     0.60
4   4   12.3     0.80
5   8   8.3      0.67
6   10  14.2     0.90  min.
Table 4.7: Task Table obtained by simulating the model of Fig. 4.8.

task                            Wi
formal verification of invoice  1.2
trigger payment                 2.2
invoice registration            2.7
release invoice                 2.9
take n2 = 2. Thus, the number of resources for the resource accounting
will be nacc = n1 + n2 = 1 + 2 = 3. Now, we will continue with the
resource sales. For n3 assigned to invoice registration, 1/2 < n3 · 1/2,
and we take n3 = 2. Finally, for the director, 0.7 · 1/2 < n4 · 1/2, and
we take n4 = 1.
In summary, the minimal number of resources will be nacc =
3, nsal = 2, and ndir = 1.
(b) The simulation yields the following utilisation rates: Uacc =
0.50, Usal = 0.50 and Udir = 0.30. Thus, the average utilisation rate for
the entire process will be: U = (3 · 0.50 + 2 · 0.50 + 1 · 0.30)/(3 + 2 +
1) = 0.47.
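The weighting of the utilisation rates can be sketched as:

```python
def average_utilisation(resources):
    """resources: (n_i, U_i) pairs; each U_i is weighted by its n_i."""
    total_n = sum(n for n, _ in resources)
    return sum(n * u for n, u in resources) / total_n

# accounting (3 resources), sales (2) and director (1), as above
U_process = average_utilisation([(3, 0.50), (2, 0.50), (1, 0.30)])
```

U_process evaluates to approximately 0.47, as computed above.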
For calculating W for the entire process, we need to summarise
the results in the task table. This is shown in Table 4.7. We
will calculate the times of each path of the process. We have two
possible paths. The first one is: formal verification of invoice -
invoice registration - trigger payment, and the second one is:
formal verification of invoice - invoice registration - release invoice
- authorisation received - trigger payment. The frequencies of each
path are 0.7 and 0.3 respectively. Each path has the following cycle
time. The first path: 1.2 + 2.7 + 2.2 = 6.1 [hr], and the second:
1.2 + 2.7 + 2.9 + 6.0 + 2.2 = 15.0 [hr], where we added the time of the
event authorisation received with a fixed time of 6 [hr]. Therefore, the
cycle time for the entire process will be: W = 0.7 · 6.1 + 0.3 · 15.0 = 8.8
[hr].
Table 4.8: Task Table obtained by simulating the model of Fig. 4.10.

task  1    2     3    4    5    6    7
Wi    9.6  11.7  8.7  6.6  6.6  2.6  29.9
or other theories that support it. We opted for the so-called Contingency
Theory approach as proposed by H. Mintzberg1. According to
this theory, the organisation adapts its organisational parameters to
certain environments in the search to minimise coordination costs.
The theory identifies five coordination mechanisms (or archetypes)
that classify each organisation into one of them. Each archetype is
1 Henry Mintzberg. Structure in fives: Designing effective organizations. Prentice-Hall, Inc, 1993
140 practical business process engineering
associated decision often does not bring any benefit. Moreover, most
of the time, standardising the information is a preliminary step to
improving the decision. Standardisation of information can also be
useful if it is then managed in a computer system such as a Database
Management System. Consequently, this can accelerate information processing and improve transparency. Some typical improvements in
information and decision standardisation are as follows:
Problem 4.13. After an interview with the issue list manager, the initial model of the process is obtained, as shown in Fig. 4.11. In this model, the issue list manager reviews the issue list. Following this task, the same issue list manager identifies which issues are ready to start a discussion cycle and authorises those that are ready. This initial model is the starting point of the improvement process. Once the initial model is obtained, a more precise investigation determines how the process is executed, how the tasks are triggered, the processing times and, eventually, the costs. We call this new model the as-is model; it is shown in Fig. 4.12. Note that this model is triggered every Friday and is executed by a user who takes an intermediate time of 6 hours between the two tasks. Answer the following questions: (a) determine an improved to-be model and (b) calculate the differences in cycle time from the as-is model.
Solution 4.13 This example is very simple, but it shows how to use the SAC improvement method. The example shows how to design processes through marginal improvements, ensuring risk control of the implementation of changes by using the SAC method.
(a) The first phase of the method consists of Information and Decision Standardisation, phase (S). In practice, this is done by collecting the manual or system documentation that supports the process. The collected data are specified in a model. The most suitable technical model for this is the Entity Relationship Model (ERM); however, the ERM standard imposes modelling requirements that we will avoid this time. Therefore, we will simply group the data and information into a tree model that will serve as the basis for the ERM model. This model is shown in Fig. 4.13. A second sub-step of phase (S) is related to the improvement of the decisions involved. In this case, the decision is to determine which issues are ready for the discussion cycle. This can be a complex task, and the decision rule can involve professional work. We will assume that, to start the discussion cycle of an issue, two conditions are required: that the issue was created more than 15 days ago and that it has more than 5 participants. Since this is a fairly simple decision rule, we will assume that it will be embedded in the to-be model's issue list review task. We propose that the determination of the issues ready for discussion will be supported by the issue management system and will be done in a composed task, as shown in Fig. 4.14. Note that the fraction of issues that initiate discussion (80%) remains the same as in the as-is model, since the decision rule remains unchanged from the as-is model. Additionally, in this case we propose an alternative improvement of phase (A). We propose to fully automate the review issue list task and the determination of which issues are ready, since the task has already been sufficiently standardised in a previous automation phase. This solution is shown in the to-be model in Fig. 4.15, where we replace the task with a fully automatic service that also executes the rule selecting the issues ready for the discussion cycle.
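The decision rule assumed in the solution (created more than 15 days ago and more than 5 participants) can be sketched as follows; the `Issue` record and its field names are illustrative, not from the book:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Issue:              # illustrative record; field names are ours
    created: date
    participants: int

def ready_for_discussion(issue: Issue, today: date) -> bool:
    """Rule assumed in the text: older than 15 days and more than
    5 participants."""
    old_enough = (today - issue.created) > timedelta(days=15)
    return old_enough and issue.participants > 5

today = date(2024, 5, 1)
old_busy = Issue(created=today - timedelta(days=20), participants=8)
fresh = Issue(created=today - timedelta(days=3), participants=8)
print(ready_for_discussion(old_busy, today))  # True
print(ready_for_discussion(fresh, today))     # False
```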
(b) The as-is model of Fig. 4.12 has a cycle time of 7 hours.
analysis and optimisation 145
This occurs due to the interruption of the semi-manual task. The to-be model of Fig. 4.14 has a cycle time of 30 min, which is achieved by standardising the information and composing both tasks into a single task. Finally, the to-be alternative model of Fig. 4.15 is reduced to 0 min by fully automating the process, with the simple decision rule embedded in the automatic task. In short, there is a reduction from 7 hours to 0.
Solution 4.14 (a) As before, we will follow the SAC method. The data are specified in a technical data model that serves for further construction of the ERM model. This data model is shown in Fig. 4.18. The second part of phase (S) tries to improve the decisions involved. In this example, the decision is to classify the invoice in order to decide whether the payment requires special approval or not. The decision involved in this case is more complex than in the previous example, although not enough to require the work of a professional employee. The decision model is shown in Fig. 4.19. As you can see in the figure, this rule considers the product type, customer type, and amount to classify the invoice. Thus we end phase (S) of the method and continue with phase (A). In phase (A), we automate the decision rule and the release of the invoice. We additionally model an intermediate message event requesting authorisation before the next task (trigger payment).
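The classification rule of Fig. 4.19 is not reproduced here, so the sketch below uses hypothetical product types, customer types, and thresholds purely to illustrate the shape of such a rule:

```python
def needs_special_approval(product_type: str, customer_type: str,
                           amount: float) -> bool:
    """Hypothetical decision table in the spirit of Fig. 4.19; the real
    categories and thresholds live in the figure, not reproduced here."""
    if customer_type == "preferred":
        return amount > 50_000   # preferred customers get a higher limit
    if product_type == "custom":
        return True              # custom products always need approval
    return amount > 10_000       # default threshold for everyone else

print(needs_special_approval("standard", "preferred", 20_000))  # False
print(needs_special_approval("custom", "regular", 1_000))       # True
```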
Problem 4.15. The initial model has been obtained through personal interviews and is shown in Fig. 4.21. In this model, the client places an order, then a clerk enters the client's requirement and analyses the feasibility of the request. The analysis can have three alternative results; one of them requires an additional task of trying to clarify the request. Finally, the process has two possible outcomes: decline the request or make a tentative offer. As in the previous examples, once the initial model is obtained, a more precise investigation determines how the process is executed, how the tasks are triggered, the processing times and, eventually, the costs. This results in the as-is model shown in Fig. 4.22. In this as-is model we observe that all tasks are manual (or essentially manual, if tools such as Excel or a local database are used). As each task is performed manually, additional times are added, represented by the intermediate events that occur. Such events represent the fact that the staff member does not work on each task continuously and lets a day go by before the next activity. That is why all the intermediate events last 24 hours. Additionally, the actions of the customer have not been modelled, although it has been specified that there is an interaction with the client.
Solution 4.15 (a) We will follow the SAC method. As before, the data are specified in a technical data model that serves for further construction of the ERM model. This model is shown in Fig. 4.23. In phase (S) we propose an improvement of the decision rules involved. The decision is to analyse the feasibility of the request. As before, the decision is relatively complex, although not complex enough to require professional work. The decision model is shown in Fig. 4.24. As you can see from the figure, this rule considers the total amount, product availability, and quantity to determine the feasibility of the request. Thus we end phase (S) of the method and continue with phase (A). In phase (A), we automate the decision rule and set a user task to clarify the order in the cases that need it; the clerk judging the order, in turn, decides whether the request is possible or impossible. We additionally model a 24-hour intermediate timer that represents the time it takes the user to start the next task. The resulting to-be model is shown in Fig. 4.25.
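As with the previous example, the feasibility rule of Fig. 4.24 is not reproduced here; the sketch below uses hypothetical thresholds only to illustrate a rule with the three outcomes described in the problem:

```python
def assess_feasibility(total_amount: float, available_qty: int,
                       requested_qty: int) -> str:
    """Hypothetical rule in the spirit of Fig. 4.24; the actual
    thresholds are in the figure, not reproduced here."""
    if requested_qty <= available_qty and total_amount <= 100_000:
        return "feasible"        # leads to a tentative offer
    if requested_qty > 2 * available_qty:
        return "infeasible"      # leads to declining the request
    return "clarify"             # borderline: manual clarification task

print(assess_feasibility(5_000, 100, 40))    # feasible
print(assess_feasibility(5_000, 10, 50))     # infeasible
print(assess_feasibility(150_000, 100, 40))  # clarify
```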
(b) We then calculate the performance, in terms of cycle times, of the as-is model and the respective to-be model. Table 4.12 shows the average performance of the as-is process. The average lengths of all paths are determined and weighted accordingly. The model has four alternative paths, with a weighted average of 3d 3h 10'. On the other hand, the standardisation and automation of the process
5.1 Introduction
6.1 Introduction
Process Perspective
The following exercises on the process perspective are intended to help understand how the α algorithm works. This algorithm has been extended to handle several real-world situations in which it does not work adequately, namely: invisible activities, duplicate activities, non-free-choice constructs, short loops, and, finally, noise, exceptions, and incompleteness. Here we show the basic α algorithm and some examples of these problems.
Table 6.1: Event log.

Case ID   1 2 3 3 1 1 2 4 2 2 5 4 1 3 3 4 5 5 4
Task      A A A B B C C A B D A C D C D B E D D

Figure 6.1: Model discovered from the event log of Table 6.1.
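Grouping the log of Table 6.1 per Case ID gives the traces ABCD, ACBD, ABCD, ACBD, and AED. The footprint that the α algorithm builds from such a log can be sketched as follows (a minimal sketch of the first step only; the full algorithm also derives places from these relations):

```python
# Footprint construction, the first step of the alpha algorithm: from
# the direct-succession pairs observed in the log, each pair of
# activities is classified as causality (-> / <-), parallel (||),
# or unrelated (#).
def footprint(traces):
    succ = set()   # directly-follows pairs a > b
    acts = set()
    for trace in traces:
        acts.update(trace)
        succ.update(zip(trace, trace[1:]))
    rel = {}
    for a in acts:
        for b in acts:
            ab, ba = (a, b) in succ, (b, a) in succ
            if ab and not ba:
                rel[a, b] = "->"
            elif ba and not ab:
                rel[a, b] = "<-"
            elif ab and ba:
                rel[a, b] = "||"
            else:
                rel[a, b] = "#"
    return rel

# Cases of the event log in Table 6.1, grouped per Case ID
traces = [list("ABCD"), list("ACBD"), list("ABCD"), list("ACBD"), list("AED")]
rel = footprint(traces)
print(rel["A", "B"], rel["B", "C"], rel["A", "D"])  # -> || #
```

B and C come out parallel because both orders B-C and C-B are observed, which is exactly what the discovered model of Figure 6.1 expresses.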
Table 6.4: Event log.

Case ID   1 2 3 3 1 1 2 4 2 1 2 3 4 3 5 4 5 4
Task      A A A B B C C A B D D C C D E B F D

Solution 6.4 One of the basic assumptions of process mining (and not only of the α algorithm) is that every executed activity is recorded
Figure 6.3: Model discovered from the event log of Table 6.4.
Figure 6.4: Model with duplicate activities.
Organisational Perspective
The goal of process mining from the organisational perspective is to discover the relations, hierarchies, and centralities among originators or groups of originators. These relations, hierarchies, and centralities can be measured according to several metrics. The problems posed below do not address all of these metrics or all of the relations, but they do address the most relevant ones. A proper understanding of these problems will make it possible to understand and tackle other, more complex cases.
process mining 169
Table 6.6: Originator × task matrix from Table 6.5.

          A   B1  B2  C
Camila    4   0   0   4
Ernesto   0   4   0   0
Carlos    2   0   0   2
Emilia    0   0   2   0
Table 6.8: Originator × task matrix from Table 6.7.

John     6  0  6  0  0  0  0  0
Sue      0  0  0  2  0  0  1  3
Mike     0  2  0  0  1  0  0  0
Pete     0  1  0  0  1  0  0  0
Jane     0  0  0  0  0  2  0  0
Clare    0  0  0  2  0  0  1  3
Fred     0  0  0  0  1  0  0  0
Robert   0  2  0  0  1  0  0  0
Mona     0  1  0  0  0  2  0  0
Solution 6.11 Table 6.8, obtained above, allows us to calculate distances between the originators (distances between the vectors corresponding to the rows). This distance can be interpreted as how close the originators are in terms of similar activities (joint activities). The metric most used to measure this distance is Pearson's correlation coefficient, whose calculation method is explained in Appendix 6.3. Despite the simplicity of the calculation, the number of operations makes the solution somewhat tedious, so we use a spreadsheet, shown in Figure 6.8 (it can be downloaded from the following link: link a planilla).
From the calculation of these coefficients, it is possible to obtain the network of relations among the originators shown in Figure 6.9. The arcs drawn between any pair of nodes represent distances greater than 0.
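Applied to the rows of Table 6.8, the coefficient can be computed directly from the formula of Appendix 6.3:

```python
from math import sqrt

def pearson(x, y):
    """Pearson's correlation coefficient, as in Appendix 6.3."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / (sqrt(n * sxx - sx ** 2) *
                                  sqrt(n * syy - sy ** 2))

# Rows of Table 6.8 (originator profiles over the eight tasks)
sue   = [0, 0, 0, 2, 0, 0, 1, 3]
clare = [0, 0, 0, 2, 0, 0, 1, 3]
mike  = [0, 2, 0, 0, 1, 0, 0, 0]
print(round(pearson(sue, clare), 6))  # 1.0 (identical work profiles)
print(round(pearson(sue, mike), 2))   # -0.37
```

Sue and Clare have identical profiles, so they come out perfectly correlated, which is why they end up linked in the network of Figure 6.9.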
Table 6.9: Shortest paths between all pairs of nodes for calculating c_B(3). Row 3, column 3, and the diagonal have been excluded from the calculation.

    1    2        3      4
1   –    1−2      1−3    1−4
2   –    –        –      –
3   –    3−2      –      –
4   –    4−3−2    4−3    –
We calculate the betweenness centrality of originator 3, that is, c_B(3), which is given by

c_B(3) = (1/6) · (0 + 0 + 0 + 0 + 0 + 1) = 1/6,

where the six terms correspond, in order, to the pairs 1−2, 1−4, 2−1, 2−4, 4−1, and 4−2; only the shortest path 4−3−2, for the pair 4−2, passes through node 3. The factor 1/6 = 1/((n − 1)(n − 2)) normalises the indicator for n = 4 nodes.
Solution 6.14 As in the previous exercise, to calculate the betweenness metric of each node of the social network of Figure 6.12, we must first build a table showing all the shortest paths between any pair of nodes. We then use the formula of Section 6.3 to calculate the indicator. Note that the network is directed and that there are no arcs from a node to itself. We now calculate the betweenness.

Table 6.10: Shortest paths between all pairs of nodes for calculating c_B(1). Row 1, column 1, and the diagonal have been excluded from the calculation.

    1                    2                          3          4        5
1   –                    1−2                        1−4−3      1−4      1−4−5
2   2−4−3−1, 2−4−5−1     –                          2−4−3      2−4      2−4−5
3   3−1                  3−1−2                      –          3−1−4    3−1−4−5
4   4−3−1, 4−5−1         4−3−1−2, 4−5−1−2           4−3        –        4−5
5   5−1                  5−1−2                      5−1−4−3    5−1−4    –
\[
\rho(x,y) = \frac{n\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{n\sum x_i^2 - \left(\sum x_i\right)^2}\,\sqrt{n\sum y_i^2 - \left(\sum y_i\right)^2}},
\]
Betweenness Centrality
In graph theory and network analysis there are several measures of the centrality of a vertex within a graph, which determine the relative importance of that vertex (for example, how important a person is in a social network or, in an urban network, how heavily used a road is, etc.). Several centrality measures are widely used in the analysis of a network: degree centrality, betweenness, closeness, and eigenvector centrality. We will not explain each of these measures here; instead, we will show how to calculate betweenness centrality.
\[
c_B(v) = \sum_{\substack{s \neq v \neq t \\ s \neq t}} \frac{\sigma_{st}(v)}{\sigma_{st}},
\]

where \(\sigma_{st}\) is the number of shortest paths from \(s\) to \(t\) and \(\sigma_{st}(v)\) is the number of those paths that pass through \(v\).
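The worked example can be cross-checked with a brute-force sketch. The directed graph below is read off the shortest paths of Table 6.9 (edges 1→2, 1→3, 1→4, 3→2, 4→3; this is our reading, since the figure is not reproduced here). The function returns the unnormalised sum; dividing by (n − 1)(n − 2) = 6 gives the normalised value 1/6:

```python
from collections import deque
from fractions import Fraction

# Directed network of Fig. 6.11 as read off Table 6.9 (assumption):
edges = {1: [2, 3, 4], 2: [], 3: [2], 4: [3]}

def shortest_paths(graph, s, t):
    """Enumerate all shortest s -> t paths by breadth-first search."""
    best, out, frontier = None, [], deque([[s]])
    while frontier:
        path = frontier.popleft()
        if best is not None and len(path) > best:
            break                      # longer than the shortest found
        if path[-1] == t:
            best = len(path)
            out.append(path)
            continue
        for nxt in graph[path[-1]]:
            if nxt not in path:
                frontier.append(path + [nxt])
    return out

def betweenness(graph, v):
    """Unnormalised c_B(v) = sum over s != v != t of sigma_st(v)/sigma_st."""
    total = Fraction(0)
    for s in graph:
        for t in graph:
            if len({s, t, v}) < 3:     # skip pairs involving v or s == t
                continue
            paths = shortest_paths(graph, s, t)
            if paths:
                through = sum(1 for p in paths if v in p[1:-1])
                total += Fraction(through, len(paths))
    return total

print(betweenness(edges, 3))  # 1: only the path 4-3-2 passes through 3
```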
Figure 6.10: Hierarchy obtained according to the joint activities criterion, measured with the Pearson's correlation coefficient metric.
Figure 6.11: Simple network of relations among 4 originators.
Figure 6.12: Simple network of relations among 5 originators.
Bibliography