Miswefwerszfw Fbahul
MATHEMATICS
Session 2022-2024
Submitted by
To Whom It May Concern
Abstract
The main goal of this project is to explore the use of stochastic simulation,
genetic algorithms, fuzzy decision making and other tools for solving complex
maintenance planning optimization problems. We use two different maintenance
activities, corrective maintenance and preventive maintenance. Since the
evaluation of specific candidate maintenance policies can take a long time to
execute and the problem of finding the optimal policy is both nonlinear and non-
convex, we propose the use of genetic algorithms (GA) for the optimization. The
main task of the GA is to find the optimal maintenance policy, which involves:
(1) the probability of breakdown, (2) calculation of the cost of corrective
maintenance, (3) calculation of the cost of preventive maintenance and (4)
calculation of ROI (Return On Investment). Another goal of this project is to
create a decision making model for multicriteria systems. To find a near-optimal
maintenance policy, we need to have an overview over the health status of the
system components. To model the health of a component we should find all the
operational criteria that affect it. We also need to analyze alternative maintenance
activities in order to make the best maintenance decisions. In order to do that, the
TOPSIS method and fuzzy decision making have been used.
To evaluate the proposed methodology, the internal combustion engine cooling system of a
typical Scania truck was used as a case study.
Contents
List of Tables 7
List of Figures 8
1 Introduction 10
1.1 Preventive Maintenance . . . . . . . . . . . . . . . . . . . . . 11
1.1.1 Application and Advantage . . . . . . . . . . . . . . . 11
1.2 Corrective Maintenance . . . . . . . . . . . . . . . . . . . . . . 12
1.2.1 Advantage and Disadvantage . . . . . . . . . . . . . . 12
1.3 Return on Investment . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Life Cycle Cost . . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 Total Cost Calculation . . . . . . . . . . . . . . . . . . 14
1.4 Replacement Strategies . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2 Component health 19
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Definitions and Functions . . . . . . . . . . . . . . . . . . . . 19
2.3 Cooling System . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 The Consequences of PM on the Component Health . . . . . . 25
2.5 Dynamic Reliability . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.1 Riccati differential equation . . . . . . . . . . . . . 25
2.5.2 Dynamic Reliability Equation . . . . . . . . . . . . . . 27
2.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3 Multi Criteria Fuzzy Decision Making (MCFD) 30
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2 Fuzzy Set and Membership Function . . . . . . . . . . . . . . 31
3.3 IFS Generalize Fuzzy Sets . . . . . . . . . . . . . . . . . . . . 31
3.4 Fuzzy Implication Operators . . . . . . . . . . . . . . . . . . . 32
3.5 Inclusion Degree Function of IFS ................ 32
3.6 TOPSIS Method . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.1 The Structure of TOPSIS Method . . . . . . . . . . . . 33
6.5 A Numerical Example . . . . . . . . . . . . . . . . . . . . . . 55
6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7 Summary, Conclusions and Future Work 61
7.1 Summary ............................. 61
7.2 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
8 Summary of reflection of objectives in the thesis 63
8.1 Objective 1 - Knowledge and Understanding . . . . . . . . . . 63
8.2 Objective 2 - Methodological Knowledge . . . . . . . . . . . . 64
8.3 Objective 3 - Critically and Systematically Integrate Knowledge 64
8.4 Objective 4 - Independently and Creatively Identify and Carry
out Advanced Tasks ....................... 64
8.5 Objective 5 - Present and Discuss Conclusions and Knowledge 65
8.6 Objective 7 - Scientific, Social and Ethical Aspects . . . . . . 65
A Java code - Event Triggers 66
List of Tables
1.1 Variable descriptions . . . . . . . . . . . . . . . . . . . . . . . 17
2.1 Idler Roller parts description . . . . . . . . . . . . . . . . . . . 23
2.2 The values in this table have been hidden at the request of
Scania . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1 Binary implication ........................ 32
3.2 Decision Making by TOPSIS . . . . . . . . . . . . . . . . . . . 35
3.3 The Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4 The alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5 The inclusion degrees of A1 in M1, M2 . . . . . . . . . . . . 39
3.6 The inclusion degrees of
List of Figures
1.1 Corrective maintenance function. ................ 12
1.2 Costs illustration [6] ....................... 15
1.3 An illustration of component age as a function of time with
periodic preventive replacement . . . . . . . . . . . . . . . . . 15
2.1 The bathtub curve hazard function [26] . . . . . . . . . . . . . 21
2.2 Cooling system parts for chassis types P-, G-, R-, T-series . . 22
2.3 Idler roller [28] .......................... 23
CHAPTER 1
Introduction
One of the most important factors in the operations of many corporations today is
to maximize profit, and one important tool to that effect is the optimization of
maintenance activities. The main goals of all maintenance activities are: 1- wait
as long as possible to perform maintenance so that the amount of useful lifetime
of parts that is thrown away is minimized while avoiding failure, 2- find the
optimal maintenance policy consisting of scheduled and unscheduled
maintenance so that the life cycle cost is minimized while satisfying system
availability and safety requirements [1].
Maintenance activities are, at the highest level, divided into two major areas:
corrective maintenance activities (CM) and preventive maintenance activities
(PM). In this work, we also define inspection and imperfect maintenance1, which can
be used as maintenance activities in some situations. Each maintenance policy
contains various activities such as 'replacement', 'minimal repair' or 'main repair', etc.
1 Imperfect maintenance is a standard maintenance activity which reduces the failure intensity but does not
leave the system as good as new [45]
With the drive to reduce the total cost of ownership of vehicles, maintenance cost analysis and
the resulting decisions have become subjects of scrutiny today [1]. A Maintenance
Decision Support System (MDSS) must be justified by an extensive cost-benefit analysis.
With accurate information about the vehicle health state and failure prediction
capabilities, we can support a more informed decision process.
In this chapter we define some engineering and economic concepts and also
analyze various strategies.
• Increasing customer service, because maintenance teams have less unplanned
maintenance and can respond more quickly to new problems [2]
1.2.1 Advantage and Disadvantage
1.3 Return on Investment
Definition 3. Return on investment denotes the benefit derived from having spent
money on a system (or product) development, change and management. ROI is
also a performance measure used to evaluate the efficiency of an investment
opportunity [1].
ROI = (Return − Investment) / Investment    (1.1)
ROIM = (CH − CHM) / IHM    (1.2)
where
• CH is the life-cycle cost of the system when managed using unscheduled
maintenance
• CHM is the life-cycle cost of the system when managed using a health
management (HM) approach
• IHM is the investment in HM
In this work CH is the total life-cycle cost of a system using corrective
maintenance and CHM is the total life-cycle cost with preventive maintenance.
Our goal is to maximize the ROI. To obtain this we want to make CHM as small as
possible; CH is of course fixed for a particular system.
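As a simple illustration, the following Java sketch evaluates Eq. 1.1 and Eq. 1.2. The class name and the numerical figures are hypothetical and serve only as an example of the calculation.

// A minimal sketch of the ROI calculations in Eq. 1.1 and Eq. 1.2.
// The variable names (ch, chm, ihm) mirror CH, CHM and IHM as defined above;
// the example figures are made up for illustration only.
public class RoiExample {

    /** Eq. 1.1: ROI = (Return - Investment) / Investment. */
    static double roi(double ret, double investment) {
        return (ret - investment) / investment;
    }

    /** Eq. 1.2: ROI_M = (CH - CHM) / IHM. */
    static double roiOfHealthManagement(double ch, double chm, double ihm) {
        return (ch - chm) / ihm;
    }

    public static void main(String[] args) {
        double ch = 1000.0;  // life-cycle cost with unscheduled (corrective) maintenance
        double chm = 700.0;  // life-cycle cost with the health management (preventive) approach
        double ihm = 100.0;  // investment in health management
        System.out.println("ROI_M = " + roiOfHealthManagement(ch, chm, ihm)); // 3.0
    }
}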
The total life-cycle cost is the sum of these contributions, CLCC = Cini + Cins + Ce + Cmr + Csd, where
• Cini : initial cost (purchase price of the component and all items unique to
that component)
• Cins : installation cost (shipping cost, rigging, installation and start up of
the component)
• Ce : energy cost (predicted energy cost for system operation)
• Cmr : maintenance and repair cost (includes both routine and predicted repairs)
• Csd : downtime cost (loss of production cost)
where K is the replacement cost, C(t) is the maintenance cost and R(t) is the rescue
cost [7].
Figure 1.3: An illustration of component age as a function of time with periodic
preventive replacement
As Figure 1.3 shows, the component ages with time, and the component age is reset upon
each preventive replacement.
(1.5)
where Cf < Ck.
Although Eq. 1.5 provides a simple solution for the optimization of fixed-interval
replacement times, the assumptions in the construction of the underlying model
are actually quite limiting, and the model subsequently fails to capture many real-world
problems in sufficient detail.
Wang classified Barlow and Hunter's model in 2002 and introduced three
replacement strategies: minimal repair, imperfect repair and perfect repair [9].
In this section we review different strategies for replacement and select one
replacement strategy for our project.
The variables in table 1.1 are used in our set of possible maintenance strategies.
Variables   Description
Table 1.1: Variable descriptions
• Strategy 1:
• Strategy 2:
In this strategy, we replace the component at the planned time tp regardless of
the component age. We also replace a component whenever it breaks down.
This strategy can be formulated as
where H(tp) indicates the number of failure replacements in the time interval
(0, tp). To determine H(tp) we used renewal theory [43] (see the sketch after this list).
To obtain the above equation, Chapman and Hall used the probability density
function (PDF) of the time to first failure and then took the inverse Laplace
transform of the function.
• Strategy 3:
In this strategy, the replacement occurs if and only if the component age reaches
the planned time; as usual, we also replace the component if we have a
breakdown. We determine the cost as:
where ρ(t) indicates the density function and P(tp) is the probability of failure
before time tp.
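The following Java sketch estimates H(tp), the expected number of failure replacements in (0, tp], by Monte Carlo simulation of a renewal process. The Weibull time-to-failure distribution and its parameters are assumptions chosen only for illustration; the thesis obtains H(tp) analytically via renewal theory [43].

import java.util.Random;

// A minimal sketch estimating H(tp) by simulating a renewal process:
// each failure is immediately followed by a replacement, renewing the process.
// The Weibull lifetime distribution (shape k, scale lambda) is an assumption.
public class RenewalEstimate {
    static final Random RNG = new Random(0);

    // Sample a Weibull(k, lambda) lifetime by inversion of the CDF.
    static double sampleLifetime(double k, double lambda) {
        double u = RNG.nextDouble();
        return lambda * Math.pow(-Math.log(1.0 - u), 1.0 / k);
    }

    // Estimate H(tp): average number of failures before tp over many histories.
    static double expectedFailures(double tp, double k, double lambda, int runs) {
        long total = 0;
        for (int r = 0; r < runs; r++) {
            double t = 0.0;
            int failures = 0;
            while ((t += sampleLifetime(k, lambda)) <= tp) {
                failures++; // failed component replaced, process renews
            }
            total += failures;
        }
        return (double) total / runs;
    }

    public static void main(String[] args) {
        // Hypothetical parameters: shape 1.5, scale 300 time units, horizon tp = 500.
        System.out.println("H(tp) ~ " + expectedFailures(500.0, 1.5, 300.0, 100_000));
    }
}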
1.5 Conclusions
As future work for the 'replacement strategy' part of this project we are going to
examine strategy 3 and compare the results in the different cases.
CHAPTER 2
Component health
2.1 Introduction
(2.1)
where r is the number of failures, D is the number of components tested, and Af is
the acceleration factor derived from the Arrhenius equation, which can be
calculated by [24]:
(2.2)
where
• The hazard function h(t) is a calculation of the failure rate over a time interval (t2 − t1).
• The failure rate function λ(t) shows the number of failures per unit of time; it is
related to the hazard function in that its plot over time has the same shape [25].
λ(t) = f(t)/R(t)    (2.5)
where f(t) is the density of the time to first failure [27] and R(t) = 1 − F(t) is the
reliability function, F(t) being the distribution function of the time to first failure.
• Degraded factor (A): the amount multiplied by the mean time between failures
of a component to get the operational MTBF [27].
The "bathtub curve" in Figure 2.1 is a well-known curve used in reliability
engineering and illustrates a particular form of the hazard function.
In the bathtub curve, the mean time to failure corresponds to the phase with constant
failure rate, which represents the predicted elapsed time between inherent failures of a
system during operation [27].
To get a better understanding of the system health status, we calculate the failure
rate, the mean time between failures and the time to first failure using Eq. 2.1,
Eq. 2.3 and Eq. 2.4.
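A minimal Java sketch of this kind of calculation follows. The concrete formulas used here, lambda = r / (D * tTest * Af) and MTBF = 1 / lambda, are standard accelerated-test estimates and are an assumption for illustration; only the quantities r, D and Af are taken from the definitions above, and the test data are hypothetical.

// A minimal sketch of the kind of calculation behind Eq. 2.1, 2.3 and 2.4,
// under the assumed standard accelerated-test formulas stated in the text above.
public class FailureRateExample {

    /** Failure rate from r failures among D tested units over tTest hours, accelerated by Af. */
    static double failureRate(int r, int d, double tTestHours, double af) {
        return r / (d * tTestHours * af);
    }

    static double mtbf(double failureRate) {
        return 1.0 / failureRate;
    }

    public static void main(String[] args) {
        // Hypothetical test data: 4 failures among 50 units tested for 1000 h, Af = 20.
        double lambda = failureRate(4, 50, 1000.0, 20.0);
        System.out.println("failure rate = " + lambda + " per hour");
        System.out.println("MTBF = " + mtbf(lambda) + " hours");
    }
}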
2.3 Cooling System
The cooling system is an important system in the truck's engine. Its task is to cool
the engine continuously by circulating coolant liquid.
The cooling system consists of various parts such as: coolant, idler roller,
compressor, belt tensioner, axle journal (axeltapp), generator and poly-V belt.
Figure 2.2 shows the cooling system parts in typical Scania trucks (P-, G-, R- and
T-series).
Figure 2.2: Cooling system parts for chassis types P-, G-, R-, T-series
To avoid lengthy calculations and to get a better analysis of the health of the
cooling system, we chose the idler roller as a representative subcomponent.
where:
Part no.   Description             Quantity
1          Shell                   1
2          Bearing Housing         2
3          Shaft                   1
5          Bearing                 2
6          Female Labyrinth Seal   2
10         Cover                   2
Table 2.1: Idler Roller parts description
As our study shows, the 'Bearing' (part number 5) is the part most vulnerable to
failure.
Table 2.2 on the next page presents the failure rate, MTBF and time to first
failure (TTFF) for the idler roller; the values were calculated using Eq. 2.1, Eq. 2.3 and
Eq. 2.4. To compute the idler roller health status we used the Scania workshop manual and
warranty statistics (survival analysis).
Note: The values in Table 2.2 have been hidden according to Scania's security
policy for confidential data.
dR(t)/dt = −h(t) R(t)    (2.6)
where h(t) and R(t) denote the hazard rate function and the reliability function,
respectively. According to 'The dynamic reliability models for fatigue crack
growth problem', see [30], we can rewrite the hazard function as:
(2.7)
where λ0 and A represent the initial failure rate and the degraded factor, respectively,
and R0 is the initial reliability.
In other words, the dynamic reliability method gives a more realistic picture
of the system availability, risk and safety.
In this section, we combine Eq. 2.6 and Eq. 2.7 to define a new equation for
the dynamic reliability of a mechanical system. To obtain this new equation, we use
the Riccati differential equation.
The Riccati equation appears in different areas of mathematics, such as the theory
of conformal mapping [33], algebra and geometry.
(2.8)
If r(x) ≠ 0, and if we accept u(x) as a solution of the differential equation, then:
(2.9)
(2.10)
then:
(2.11)
By using the Riccati differential equation, we are able to create a general form for
the dynamic reliability based on R(t). To obtain this we find R(t) from Eq. 2.7:
(2.12)
(2.13)
(2.14)
h1(t) = (λ0 + A·R0)
then
(2.15)
we consider:
then:
(2.16)
where R0 is the initial reliability, λ0 is the initial failure rate, which can be calculated by
Eq. 2.1, and A indicates the degraded factor, which can be found by fitting Eq. 2.16 to
experimental data or simulation results. By using Eq. 2.16 the health evolution of
a mechanical system can be calculated [34].
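As a numerical illustration, the following Java sketch integrates the relation of Eq. 2.6, dR/dt = −h(t)R(t), with a simple Euler step. The degradation-type hazard h = λ0 + A(1 − R) used in the sketch is an assumed illustrative form built from λ0, A and R0; it is not the exact expression of Eq. 2.16, and the parameter values are hypothetical.

// A minimal numerical sketch of a dynamic reliability curve, integrating
// dR/dt = -h(t) R(t) (Eq. 2.6) with forward Euler. The hazard form
// h = lambda0 + A * (1 - R) is an assumed illustration only.
public class DynamicReliabilitySketch {

    static double[] reliabilityCurve(double r0, double lambda0, double a,
                                     double dt, int steps) {
        double[] r = new double[steps + 1];
        r[0] = r0;
        for (int i = 0; i < steps; i++) {
            double hazard = lambda0 + a * (1.0 - r[i]); // assumed hazard form
            r[i + 1] = r[i] - dt * hazard * r[i];        // Euler step of dR/dt = -h R
        }
        return r;
    }

    public static void main(String[] args) {
        // Hypothetical parameters: R0 = 0.99, lambda0 = 1e-4 per hour, A = 5e-4.
        double[] r = reliabilityCurve(0.99, 1e-4, 5e-4, 1.0, 5000);
        System.out.println("R(0)    = " + r[0]);
        System.out.println("R(5000) = " + r[5000]);
    }
}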
Eq. 2.16 was introduced by Yuo-Tern Tsai and Kuo-Shong Wang in April 2004 (see [35],
page 91) as a dynamic reliability equation that describes the degradation behavior of a
component. The system reliability shows how consistently we obtain the desired
outcome for a given input with as little waste as possible. As future work
we are going to use Eq. 2.16 to calculate the system reliability after every
preventive or corrective maintenance action and after every inspection.
2.6 Conclusions
In this chapter we introduced some concepts, such as the failure rate and the time to first
failure, to get a better understanding of the health of a component. We
selected the cooling system as an example and calculated the failure rate, time to first
failure, mean time to failure and mean time between failures for the idler roller,
which is a subcomponent of the cooling system.
As future work for this part of the project we are going to simulate the system
reliability for a multi-component system.
In the next chapter we find the best maintenance activity for the cooling
system.
CHAPTER 3
Multi Criteria Fuzzy Decision Making (MCFD)
3.1 Introduction
The health of a system depends on the health of all the components that make up
the system. Since there are various criteria that affect the health of a mechanical
system, our decision making process becomes a multi-criteria decision making problem. We
consider the health of a system to be between 0 and 1 in this project: a health
equal to 0 means that the system fails, and a health equal to 1 indicates a fully
healthy system. With this convention, we are able to formulate our decision
making problem in a fuzzy environment.
There are different multi-criteria decision making techniques for this kind of problem,
such as: AHP (the Analytical Hierarchy Process), TOPSIS (the Technique for Order of Preference
by Similarity to Ideal Solution), SAW (Simple Additive Weighting), ELECTRE
(Elimination and Choice Expressing Reality), SMART (the Simple Multi-Attribute
Rating Technique) and ANP (the Analytical Network Process). In this project we use the
TOPSIS technique for a decision making model in a fuzzy environment.
To reach this goal we need to identify all criteria that affect the health of the
components. Some of these criteria are completely quantifiable, some partially
quantifiable, and other criteria are completely subjective. We also need to define
different maintenance activities, such as preventive, corrective, imperfect
maintenance and inspection.
The result from this chapter shows the best maintenance activities to perform on
each component; the policy of when to perform these activities is in fact the main
output from the simulation and optimization outlined in the next chapter.
Definition 6. A fuzzy set is a pair (A, m) where A is a set and m : A → [0,1]. For each
x ∈ A, m(x) is called the grade of membership of x in (A, m). For a finite set A =
{x1,...,xn}, the fuzzy set (A, m) is often denoted by {m(x1)/x1,...,m(xn)/xn}. Let x ∈
A. Then x is called fully included in the fuzzy set (A, m) if m(x) = 1 and is called
not included if m(x) = 0. The set {x ∈ A | m(x) > 0} is called the support of (A, m)
and the set {x ∈ A | m(x) = 1} is called the kernel. x is a fuzzy member if 0 < m(x) < 1, [38].
Definition 7. For any set X, a membership function on X is any function from X to
the real unit interval [0,1]. The membership function which represents a fuzzy set
A is denoted by µA. For an element x of X, the value µA(x) is called the membership
degree of x in the fuzzy set A, [39].
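The following Java sketch illustrates Definition 7 for one concrete fuzzy set. The triangular shape of the membership function and its breakpoints are assumptions chosen only to make the definition tangible, e.g. for a component health value between 0 and 1.

// A minimal sketch of Definition 7: a membership function muA : X -> [0, 1].
// The triangular shape and the breakpoints (a, b, c) are assumed for illustration.
public class TriangularMembership {
    final double a, b, c; // left foot, peak, right foot

    TriangularMembership(double a, double b, double c) {
        this.a = a; this.b = b; this.c = c;
    }

    /** Membership degree muA(x) in [0, 1]. */
    double degree(double x) {
        if (x <= a || x >= c) return 0.0;
        if (x == b) return 1.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        // "Moderately healthy" component: full membership at health 0.5.
        TriangularMembership moderate = new TriangularMembership(0.2, 0.5, 0.8);
        System.out.println(moderate.degree(0.35)); // 0.5, a fuzzy member
        System.out.println(moderate.degree(0.5));  // 1.0, fully included
        System.out.println(moderate.degree(0.9));  // 0.0, not included
    }
}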
An IFS generalizes a fuzzy set: each element is assigned a membership degree µ and a
non-membership degree ν, where an ordinary fuzzy set corresponds to the special case ν = 1 − µ.
a   b   a→b
0   0   1
0   1   1
1   0   0
1   1   1
Table 3.1: Binary implication
,
(3.2)
. (3.3)
The TOPSIS method is based on two main solutions: 1- the positive ideal solution,
which has the best attribute values, and 2- the negative ideal solution, which has the
worst attribute values.
TOPSIS measures the geometric distances between all alternatives and the positive and
negative ideal solutions and selects the best alternative. The best alternative is the
one which has the shortest distance from the positive ideal solution and also
the farthest distance from the negative ideal solution [44].
The TOPSIS method consists of six steps. We assume a decision making problem that
has m alternatives and n criteria. The steps can be performed as follows:
Step 1:
Given m alternatives and n criteria, create the evaluation matrix with m rows and n
columns. The value at the intersection of alternative i and criterion j is denoted xij,
giving the matrix (xij)m×n.
Step 2:
Create a set of weights wj for the criteria and normalize the matrix (xij)m×n.
Step 3:
Identify the positive ideal solution, denoted A+, and the negative ideal solution,
denoted A−.
Step 4:
Measure the distance between each alternative and A+, denoted D+, and the
distance between each alternative and A−, denoted D−.
Step 5:
Determine the ranking index (pi) of each alternative, calculated as pi = D−/(D+ + D−).
Step 6:
Rank the alternatives according to the index from step 5. A minimal sketch of these six
steps is given below.
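The Java sketch below walks through the six steps for a crisp decision matrix. Vector (Euclidean) normalization, benefit-type criteria (larger is better) and the closeness index pi = D−/(D+ + D−) are assumptions made for the sketch, since the steps above leave these choices open; the chapter itself later uses an IFS-based variant.

// A minimal sketch of the six TOPSIS steps for a crisp m x n decision matrix,
// under the assumptions stated in the text above.
public class TopsisSketch {

    static double[] rank(double[][] x, double[] w) {
        int m = x.length, n = x[0].length;
        double[][] v = new double[m][n];

        // Steps 1-2: normalize each column and apply the weights.
        for (int j = 0; j < n; j++) {
            double norm = 0;
            for (int i = 0; i < m; i++) norm += x[i][j] * x[i][j];
            norm = Math.sqrt(norm);
            for (int i = 0; i < m; i++) v[i][j] = w[j] * x[i][j] / norm;
        }

        // Step 3: positive and negative ideal solutions (benefit criteria assumed).
        double[] aPlus = new double[n], aMinus = new double[n];
        for (int j = 0; j < n; j++) {
            aPlus[j] = Double.NEGATIVE_INFINITY;
            aMinus[j] = Double.POSITIVE_INFINITY;
            for (int i = 0; i < m; i++) {
                aPlus[j] = Math.max(aPlus[j], v[i][j]);
                aMinus[j] = Math.min(aMinus[j], v[i][j]);
            }
        }

        // Steps 4-5: distances to the ideals and ranking index p_i.
        double[] p = new double[m];
        for (int i = 0; i < m; i++) {
            double dPlus = 0, dMinus = 0;
            for (int j = 0; j < n; j++) {
                dPlus += Math.pow(v[i][j] - aPlus[j], 2);
                dMinus += Math.pow(v[i][j] - aMinus[j], 2);
            }
            p[i] = Math.sqrt(dMinus) / (Math.sqrt(dPlus) + Math.sqrt(dMinus));
        }
        return p; // Step 6: the alternative with the largest p_i is preferred.
    }

    public static void main(String[] args) {
        // Hypothetical scores of 3 alternatives on 2 criteria, equal weights.
        double[][] x = { { 7, 9 }, { 8, 7 }, { 9, 6 } };
        double[] p = rank(x, new double[] { 0.5, 0.5 });
        for (int i = 0; i < p.length; i++) System.out.println("p" + (i + 1) + " = " + p[i]);
    }
}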
Figure 3.1: TOPSIS illustration (criteria C1-C5, alternatives M1, M2, M3, and the resulting decision)
As the figure shows, every criterion affects every single alternative. With a set of
alternatives and criteria, the decision maker can, for example, make three decisions as follows:
C1   M1, M2, M3   M2, M1   M1   M3
C3   M1, M2, M3   M1, M3   M3   M1
C4   M1, M2, M3   M3, M2   M2   M2
C5   M1, M2, M3   M2       M2   M3
Table 3.2: Decision Making by TOPSIS
Decision Making
We now use the TOPSIS method to calculate the distances to A+f and A−f.
Assume that we have a set of criteria C and a set of alternatives M:
C = {C1, C2, ..., Cm}
M = {M1, M2, ..., Mn}
According to [40] we assume that the alternatives and criteria are represented
(using IFS) as:
M1 = {(C1, µ1,1, ν1,1), (C2, µ1,2, ν1,2), ..., (Cm, µ1,m, ν1,m)}
M2 = {(C1, µ2,1, ν2,1), (C2, µ2,2, ν2,2), ..., (Cm, µ2,m, ν2,m)}
...
Mn = {(C1, µn,1, νn,1), (C2, µn,2, νn,2), ..., (Cm, µn,m, νn,m)},
where µi,j indicates the degree to which the alternative Mi satisfies criterion Cj, and
νi,j indicates the degree to which the alternative Mi does not satisfy criterion Cj.
A+f = {(C1, max{µi,1}, min{νi,1}),
(C2, max{µi,2}, min{νi,2}),
...
(Cm, max{µi,m}, min{νi,m})}.
A−f = {(C1, min{µi,1}, max{νi,1}),
(C2, min{µi,2}, max{νi,2}),
...
(Cm, min{µi,m}, max{νi,m})}.
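The following Java sketch constructs A+f and A−f exactly as in these definitions: for each criterion, A+f takes (max µ, min ν) and A−f takes (min µ, max ν). Representing each alternative as parallel mu[] and nu[] arrays is an assumed layout for the sketch; the test values come from the numerical example later in this chapter.

import java.util.Arrays;

// A minimal sketch of the ideal solutions A+f and A-f for IFS-valued alternatives.
// mu[i][j], nu[i][j]: membership and non-membership of alternative Mi on criterion Cj.
public class IfsIdealSolutions {

    static double[][] idealSolutions(double[][] mu, double[][] nu) {
        int n = mu.length, m = mu[0].length;
        double[] muPlus = new double[m], nuPlus = new double[m];
        double[] muMinus = new double[m], nuMinus = new double[m];
        for (int j = 0; j < m; j++) {
            muPlus[j] = 0; nuPlus[j] = 1; muMinus[j] = 1; nuMinus[j] = 0;
            for (int i = 0; i < n; i++) {
                muPlus[j]  = Math.max(muPlus[j],  mu[i][j]);
                nuPlus[j]  = Math.min(nuPlus[j],  nu[i][j]);
                muMinus[j] = Math.min(muMinus[j], mu[i][j]);
                nuMinus[j] = Math.max(nuMinus[j], nu[i][j]);
            }
        }
        return new double[][] { muPlus, nuPlus, muMinus, nuMinus };
    }

    public static void main(String[] args) {
        // The (C1, C2) values of M1, M2, M3 from the numerical example below.
        double[][] mu = { { 0.5, 0.5 }, { 0.5, 0.5 }, { 0.6, 0.4 } };
        double[][] nu = { { 0.6, 0.1 }, { 0.6, 0.0 }, { 0.2, 0.3 } };
        double[][] ideal = idealSolutions(mu, nu);
        System.out.println("A+f mu: " + Arrays.toString(ideal[0])   // [0.6, 0.5]
                         + "  nu: " + Arrays.toString(ideal[1]));   // [0.2, 0.0]
        System.out.println("A-f mu: " + Arrays.toString(ideal[2])   // [0.5, 0.4]
                         + "  nu: " + Arrays.toString(ideal[3]));   // [0.6, 0.3]
    }
}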
To calculate the distances of each alternative from A+f and A−f we define two inclusion
degrees as follows:
Definition 12. The inclusion degree D+(Mi) of the positive ideal solution in
alternative Mi and the inclusion degree D−(Mi) of the negative ideal solution in
alternative Mi are respectively defined as
where I denotes the inclusion degree function, see Eq. 3.2.
Definition 13. The ranking index of alternative Mi is defined as
(3.7)
where 0 ≤ pi ≤ 1.
If there exists i0 ∈ {1,2,...,n} such that pi0 = max{p1, p2, ..., pn}, then the alternative
Mi0 is the best alternative, [40].
To use MCFD to solve our problem we need to identify all criteria that affect
the component's health. To perform this process, we analyzed various
components in Scania's technical module system with the help of engineers at
Scania R&D. In Table 3.3 we list the 15 most important criteria with the highest
impact on a mechanical system.
Number   Description
C1       Calendar time
C2       Mileage
C3       Chassis load and strength
C4       Material operation
C5       Component health status
C6       Humidity
C7       Temperature
C8       Quality of roads
C9       Road dust
C10      Usage
C11      Fuel quality
C12      Driving styles
C13      Environment
C14      Speed
C15      Transport tasks
Table 3.3: The Criteria
To define the maintenance activities, we need to study the company policies and the
customer's perspective and requirements, which depend on the company's task
operating systems.
Let us assume that M1, M2, M3 are three maintenance alternatives which indicate
corrective maintenance, imperfect maintenance and preventive maintenance,
respectively.
To identify the criteria with the highest influence we need knowledge of the
mechanical properties of the component. We use Table 3.3 to choose the related criteria
with the highest impact on the cooling system's health.
Let C1, C2, C3 and C4 be the criteria that represent mileage, temperature, time and
humidity.
As a decision maker, we want to find which of the alternatives Mi best satisfies
the criteria C1 and C2, or just C3, according to the customer's perspective and the
company's policies.
M1 = {(C1,(0.5,0.6)), (C2,(0.5,0.1)), (C3,(0.2,0.4)), (C4,(0.1,0.5))}
M2 = {(C1,(0.5,0.6)), (C2,(0.5,0)), (C3,(0.3,0.6)), (C4,(0.5,0.2))}
M3 = {(C1,(0.6,0.2)), (C2,(0.4,0.3)), (C3,(0.2,0.3)), (C4,(0.4,0.1))}
To find the above values, we studied Scania's survival analysis (warranty data). In
this study we compared the cooling system efficiency in the different regions.
To estimate the exact coefficients for these relationships we would need to perform
accurate data mining with a suitable tool such as RapidMiner.
A+1 = {(C1,(0.6,0.2)), (C2,(0.5,0))}
A+2 = {(C3,(0.3,0.3))}
A−1 = {(C1,(0.5,0.6)), (C2,(0.4,0.3))}
A−2 = {(C3,(0.2,0.6))}
where the first element of each pair in A+i is the maximum membership value and the second
element is the minimum non-membership value (and conversely for A−i). In other words, a
positive ideal solution is a set of elements with the best attainable values and a negative
ideal solution is a set with the worst values.
We then calculate the inclusion degree function by using Eq. 3.2, but before that
we need to calculate RL by using Eq. 3.4 (the Lukasiewicz implication), here shown for
criterion C1.
Note that the parameter is a value between 0 and 1; we set it to 0.5 in this example,
and |U| is the cardinality of U, which is |U| = 2.
then:
M1,1   M2,1   M3,1
By using Eq. 3.5 and Eq. 3.6 we can calculate the inclusion degrees D+(Mi) and D−(Mi).
The ranking index of the alternatives (pi) can be calculated by using Eq. 3.7 as:
3.9 Conclusions
In this chapter we defined our problem in a fuzzy environment and used the TOPSIS
method for the decision making. As we mentioned earlier, the main advantage of
using fuzzy logic is flexibility.
We identified fifteen criteria with a direct effect on the cooling system's health and
defined some alternatives as maintenance activities in section 3.7.
As future work for the decision making part of this project we are going to
examine some other multiple-criteria decision methods, such as: fuzzy value analysis
(VA), the VIKOR method and the weighted product model.
Until now we have defined different maintenance activities, analyzed the system health
and made decisions to choose the best alternative. In the next chapter we
analyze different optimization methods to find a suitable algorithm for
optimization.
CHAPTER 4
Optimization Methods
4.1 Introduction
Most optimization algorithms and methods have been developed and
improved from previous versions. As discussed in [10], the solutions
generated by most optimization algorithms cannot be used for problems
that have a combination of discrete and continuous variables. Those algorithms
mostly provide local optima as final solutions.
In this chapter, we will look at different ways to solve multi-objective design
optimization problems and also find the best optimization algorithm for our
problem, one that provides a global optimum.
There are two approaches to solving this problem: classical optimization techniques
and intelligent (non-classical) optimization techniques.
Figure 4.1: Point 'A' is the global optimum and the other points indicate local
optima [12]
Point B is a local optimum, because it is the highest point in its own area.
In other words, a mountain peak always indicates a local optimum [13].
As we see in Figure 4.1, there are several mountains; the highest peak is always
the global optimum, which is point A in this example.
Finding the global optimum is the challenge in many cases. Metaheuristic
algorithms are newer solvers designed to find the global optimum for such problems.
1. Evaluation
2. Optimization
s.t.  g(x) ≥ g0
In fact, to solve this problem we need to run two algorithms in parallel with
different tasks. The first algorithm's task is to find the value of α, and the second
algorithm tries to optimize the objective function f(x).
Alg 1 produces the values α1, α2, ..., αm; for each αi a second algorithm Alg 2i computes the corresponding optimum x*i.
Genetic algorithm (GA), artificial bee colony algorithm, ant colony optimization
algorithms, evolution strategy (ES) and imperialist competitive algorithm are
some examples of evolutionary algorithms.
4.5 Conclusions
CHAPTER 5
Genetic Algorithm
5.1 Introduction
5.2 Structure
Before we begin describing the structure of GA, we need to define the solution
encoding which called ‘chromosomes’ in GA’s concept. In fact, the chromosomes
are a set of parameters which propose a solution to the initial problem and they
are represented as a simple string[19]. The design of the chromosome and the
parameters depends on the initial problem. We describe our design for both
chromosomes and parameters in chapter 5.
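The following Java sketch shows a real-vector chromosome with mutation and crossover, loosely mirroring the Individual interface listed in Appendix B. The Gaussian mutation, the arithmetic crossover and the placeholder fitness are assumed choices for illustration, not necessarily the operators used in the actual implementation.

import java.util.Random;

// A minimal sketch of a chromosome: a fixed-length real vector with
// mutation and crossover. Operators and fitness are illustrative assumptions.
public class VectorChromosome {
    static final Random RNG = new Random();
    final double[] genes;

    VectorChromosome(double[] genes) { this.genes = genes.clone(); }

    /** With probability p, perturb each gene by a small Gaussian step. */
    VectorChromosome mutate(double p) {
        double[] g = genes.clone();
        for (int i = 0; i < g.length; i++)
            if (RNG.nextDouble() < p) g[i] += 0.1 * RNG.nextGaussian();
        return new VectorChromosome(g);
    }

    /** Arithmetic crossover: child = interpol * this + (1 - interpol) * other. */
    VectorChromosome mate(VectorChromosome other, double interpol) {
        double[] g = new double[genes.length];
        for (int i = 0; i < g.length; i++)
            g[i] = interpol * genes[i] + (1 - interpol) * other.genes[i];
        return new VectorChromosome(g);
    }

    /** Placeholder fitness; the real model evaluates a chromosome by simulation. */
    double evaluate() {
        double sum = 0;
        for (double g : genes) sum -= g * g; // toy objective: maximize -(sum of squares)
        return sum;
    }
}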
We summarize the structure of the genetic algorithm as:
1. fitness function
5. random mutation
Figure 5.1: Genetic code of the parents and the offspring before and after the
crossover
• Prediction of weather
By using a GA, we avoid getting stuck in local optima and have a chance of finding a
globally optimal solution to the problem, and the solution becomes better and
better with time.
With a GA, the fitness function can be changed from iteration to
iteration, which allows incorporating new data into the model if it becomes available
[21].
5.4 Conclusions
The goal of this chapter was to introduce the genetic algorithm. We find the genetic
algorithm to be a robust, fast and suitable optimization algorithm for our
problem. We analyzed the structure of the genetic algorithm and discussed some
advantages of using it in this chapter. In the next chapter we
implement the genetic algorithm in our maintenance optimization model.
As future work for the optimization part of this project we are going to examine
other optimization algorithms, such as: the ant colony optimization algorithm,
the artificial bee colony algorithm and natural evolution strategies.
CHAPTER 6
Maintenance Optimization Model
6.1 Introduction
6.2 Simulation Framework Model
and a description of a system in which all events fill up a whole number of such
time-steps. By making Δt sufficiently small, such a model can describe the system
with arbitrary precision. We consider a time horizon of T such discrete time units.
At each point in time the system is described in all important aspects by a state
vector S ∈ RM; the current state includes the system time (a multiple of Δt) and any
variables describing the components of the system.
Figure 6.1: The structure of a Decidor (DecisionData and DecisionMethods)
For the rationale of this simulation system we also assume that if several plannable
and random events compete to run at a specific time, the order in which they are
executed has a limited effect on the results of the simulation. We also assume that
the plannable events, when executed, have no significant increasing effect on the
probability of the random events being triggered in the next time-step.
Consider Figure 6.2 describing the major structure of our simulation algorithm.
Figure 6.2: The simulation over t = 0, ..., T with plannable events Pi (decidors pi), the default event D, and random events Ri triggered at levels r̄i
We begin by setting the current state to the initial state, and in each iteration of the
algorithm we first execute all plannable events Pi for which pi(S) = True, we then
randomize the levels r̄i at which the random events will trigger in the future,
execute the default event D, and execute each random event Ri for which the current
level exceeds the trigger level (ri ≥ r̄i); we iterate this until the termination
time T is reached. A more detailed description is given in the following
algorithm.
1: S ← Sinitial
2: while t ≤ T do
3:   for all i do
4:     if pi(S) = True then S ← Pi(S)
5:   end for
6:   for all i do
7:     r̄i ← sample from uniform distribution on [ri(S), 1]
8:   end for
9:   S ← D(S)
10:  for all i do
11:    if ri(S) ≥ r̄i then S ← Ri(S)
12:  end for
13: end while
Note that the state S may change for subsequent i in the for-loops at lines 3 and
10, and our implementation iterates through i in increasing order; of course, the
values taken by i depend on the number of plannable and random events,
respectively.
Normally, since we have randomly triggered events, we need to run this
simulation a number of times, perhaps 10 to 1000 repetitions, and collect statistics
for the state variables in S as they develop over time. In our implementation we
gather the first two moments of all state variables for each point in time, that is,
their mean value and their mean square value; a sketch of this accumulation is given below.
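The following Java sketch accumulates the first two moments (mean and mean square) of each state variable at each point in time over repeated simulation runs, as described above. The [time][variable] array layout is an assumed representation for this sketch.

// A minimal sketch of moment collection over repeated simulation runs.
public class MomentCollector {
    final double[][] sum;       // sum of values per (time step, state variable)
    final double[][] sumSquare; // sum of squared values
    int runs = 0;

    MomentCollector(int timeSteps, int stateVariables) {
        sum = new double[timeSteps][stateVariables];
        sumSquare = new double[timeSteps][stateVariables];
    }

    /** Add one simulated trajectory: trajectory[t][j] is variable j at time t. */
    void add(double[][] trajectory) {
        for (int t = 0; t < sum.length; t++)
            for (int j = 0; j < sum[t].length; j++) {
                sum[t][j] += trajectory[t][j];
                sumSquare[t][j] += trajectory[t][j] * trajectory[t][j];
            }
        runs++;
    }

    double mean(int t, int j)       { return sum[t][j] / runs; }
    double meanSquare(int t, int j) { return sumSquare[t][j] / runs; }
}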
The framework model and simulation algorithm in the previous sections allow
for easy modeling of many real-world systems. In a specific model, one of the
most important decisions to make is the time discretization Δt. Making it too small
will increase the run-time of any simulation, and making it too large will introduce
greater error in the simulation output and in subsequent decisions based on the
output. Another key factor is choosing specific ways in which to encode the
decidors pi. One of the simplest decidors, which we have used in this paper and
specifically in our example problem class, is the linear decidor.
A linear decidor pi in our framework model is a real vector pi ∈ RM such that the
outputted decision value, True or False, is equal to the truth value of the statement
pi · S ≥ 1.
Since there is only one type of decidor (one decision method) in our example
problem class, and the data for all decidors are real vectors of the same size, we may
describe the decision process in a more compact form. Let M be the matrix defined
by
D = MS,
Our simulation algorithm, however, will not in the current version execute all such
decided plannable events, but will only in effect iterate i from 1 to N and, if di ≥ 1, perform
the event and, in anticipation of the next value of i, recompute D. A variation of
this method could be to calculate D, then execute the event with the greatest di, then
recalculate D and again pick the event with the greatest di, until all di < 1, in which
case the simulation would continue to execute the default event as usual. This
method will add overhead since the matrix multiplication may need to be recalculated
several times, but it might add extra decision intelligence by letting the decidors
have different degrees of preference for the execution of different plannable
events. A minimal sketch of this greedy variant is given below.
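The following Java sketch implements the variation just described: compute D = MS, execute the plannable event with the greatest di (as long as di ≥ 1), recompute D, and repeat until all di < 1. The dense matrix layout and the abstract executeEvent call are assumptions made for this sketch.

// A minimal sketch of the greedy decision variant described above.
public abstract class GreedyDecisionStep {

    /** Returns the state vector after executing plannable event i (assumed callback). */
    abstract double[] executeEvent(int i, double[] s);

    double[] decisionVector(double[][] m, double[] s) {
        double[] d = new double[m.length];
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < s.length; j++) d[i] += m[i][j] * s[j];
        return d;
    }

    /** One time step of the greedy variant. */
    double[] step(double[][] m, double[] s) {
        while (true) {
            double[] d = decisionVector(m, s); // recompute D after every event
            int best = -1;
            for (int i = 0; i < d.length; i++)
                if (d[i] >= 1 && (best < 0 || d[i] > d[best])) best = i;
            if (best < 0) return s;            // all d_i < 1: fall back to the default event
            s = executeEvent(best, s);
        }
    }
}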
For each component we allow for any number of randomly triggered events Ri.
Each such event has a probability distribution rij, which is a Weibull distribution
with a randomly chosen scale (λ) and shape (k). To compute the probability levels
as functions of the state, r(S), we introduce a linear dependency on the independent
state variables associated with the component (ai · S). That is, r(S) = r(ai · S);
a minimal sketch of such a trigger is given below.
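The following Java sketch computes such an event level: the Weibull CDF (scale λ, shape k) evaluated at a linear combination of the component's state variables, in the spirit of the StochasticTrigger class in Appendix A. The field names and the plain double[] state are assumptions for the sketch.

// A minimal sketch of a randomly triggered event level r(S) = r(a . S).
public class WeibullEventLevel {
    final double[] a;      // dependency on the component's independent state variables
    final double lambda;   // Weibull scale
    final double k;        // Weibull shape

    WeibullEventLevel(double[] a, double lambda, double k) {
        this.a = a.clone();
        this.lambda = lambda;
        this.k = k;
    }

    /** Probability level r(S) in [0, 1]; the event triggers once this exceeds
     *  the randomized trigger level sampled uniformly on [r(S), 1]. */
    double level(double[] state) {
        double x = 0;
        for (int i = 0; i < a.length; i++) x += a[i] * state[i];
        if (x <= 0) return 0.0;
        return 1.0 - Math.exp(-Math.pow(x / lambda, k)); // Weibull CDF
    }
}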
For each component we also allow for a number of dependent state variables in a
similar fashion, eij. These state variables are computed from the current
independent state variables associated with the component by a function f(S) =
f(vi · S), where f is a randomly chosen Weibull distribution and vi is a
suitable, randomly chosen dependency vector. We interpret these dependent
variables as a measure of efficiency for some function of the component, which will
soon be seen to affect the simulated profit.
What remains in our example class is to define, or actually parameterize, the actual
event functions Pi, Ri, D.
We generate the plannable and the random events Pi, Ri in the same fashion: S ← MS
+ w, where M is a matrix of suitable dimensions and w a vector, both chosen
randomly such that the new values of the dependent state variables associated with
the corresponding component are free to change, but depend only on the old
values of these state variables. The only other state variables allowed to change
are the time t and the current profit; time must be advanced by an integer amount
and the profit must be increased by a constant term. The default event D is
somewhat different. It may add a constant term to any dependent state of any
component, the current profit must increase by the addition of a term p and may also
decrease linearly by a factor ce multiplied by any dependent state variable (efficiency
measure), and the time must increase by 1 (Δt).
Figure 6.3: Illustration of the model (the events Pi, Ri, D and the Decidor feed the Simulation, which produces the Fitness)
6.5 A Numerical Example
We selected the following example system, randomized from our stated problem
class. The number of components is Nc = 2. Horizon time is T = 504. The first
component has two random events (failures) and one independent state variable.
The second component has only one random event and also one independent state
variable. Both components have two efficiency measures.
k
a   M   w   Δt
Component   Event   M   w   Δt   p
1           1       0   0   1    -1
1           2       0   0   1    -1
2           1       0   0   1    -1
Table 6.2: Parameters defining plannable events.
Component   Efficiency   k   v
Table 6.3: Parameters defining efficiency measures.
Figure 6.4: Results from optimization of linear decidors in the example problem.
In Figure 6.4 the expected values of the relevant state variables from 2000
iterations of a simulation of the example problem are shown.
The linear decidors of the system (for the plannable events) were optimized using a
genetic algorithm with the expected profit at the time horizon (T) as the fitness
function. All 6 graphs are rescaled to lie between 0 and 1 to allow placement in
the same figure.
The two green curves show the two independent state variables (one for each
component). The red spikes represent a high probability of having a plannable
event at those points in time. Where there is an increased probability of the
plannable events occurring, there is also a visible drop in the corresponding state
variable, because M and w are both 0 for all plannable events, in effect resetting
the states (component age) to zero.
Figure 6.5 on the next page shows the convergence of the expected profit at the
end of the horizon T for the genetic algorithm used. We begin with an expected profit
of 377; after 272 generations of the optimization we have reached a fairly long
plateau beginning at a profit of 421, and we terminate the algorithm after 512
generations with a profit of 423.
Figure 6.5 shows the convergence of the expected profit at the end of horizon T for the
genetic algorithm used to optimize the linear decidors in the example problem.
We summarize our algorithm as a flowchart on the next page. This overview allows
a better understanding of the process.
6.6 Conclusions
In section 6.4 we explained a specific model, introduced the linear decidor as
a type of decidor, and formulated the decision process in a compact form. We
finished this chapter with a numerical example using the genetic algorithm and
analyzed the result.
The main goal of this chapter was to create a simulation approach to maintenance
optimization, with the benefit of being able to optimize more complex systems.
CHAPTER 7
Summary, Conclusions and Future
Work
7.1 Summary
As discussed in the abstract, this work is split into two separate parts, one
dealing with decision making (chapter 3) and one dealing with the maintenance
optimization model (chapter 6). The aim has been to create an optimization model
for the maintenance problem. The component health was examined in chapter 2.
Several criteria were identified for the DSS in chapter 3 and a maintenance
decision was found. We analyzed different optimization methods and
algorithms in chapter 4, and the genetic algorithm was chosen as a suitable
optimization algorithm (see chapter 5). The maintenance plan, algorithm,
simulation and optimization are summarized in chapter 6, and in section 6.5
the results of the numerical example are discussed.
7.2 Conclusions
One good way to transition from the decidors, and the statistics collected for their
decisions on a given system, is the realization that decisions closer in time are
more readily understandable and predictable. As a system evolves, its inherent
stochastic nature makes, for example, the precise timing of maintenance activities
more and more difficult. In light of this we may consider optimizing a plan for the
near future in which activities are decided and fixed, and will be performed unless
unexpected events occur, in which case a new good near-future plan must be
found. For a one-component system such a short-term plan may be based on the 'time
to first intervention', where for all possible times to the first plannable event we
consider enforcing the plan and, in the simulations, if unexpected behavior occurs,
fall back on the decidors to evaluate a near-optimal 'just in time' decision policy
for the future. This provides the decision maker with a convenient single-value (profit)
expected result for planning the first intervention at different times, and
additional non-modeled preferences may be applied to the decision based on these
additional simulations.
There are three main lines of this research which should be pursued. Firstly, the
work on the criteria that affect the component health. It is our intention to
investigate thoroughly the various criteria, with a view to measuring how different
criteria affect the different components. To obtain this goal we need to perform
accurate data mining with a tool such as RapidMiner. A second line, which
follows from chapter 2, is to investigate and calculate the system reliability using the
equations that we introduced and derived in chapter 2. As a last line, we are going
to examine the results with different optimization algorithms, such as the artificial bee
colony algorithm, memetic algorithms, etc.
CHAPTER 8
Summary of reflection of objectives in the thesis
We summarize our views of the objectives achieved in this thesis in this chapter.
At the research level, I recognized that the initial problem is a multi-objective
optimization problem, and therefore I transferred the problem to a fuzzy
environment. I found 15 criteria which affect the component health. I also analyzed
the structure of the component and studied a wide range of articles in different
areas of multi-criteria decision making; I then found TOPSIS to be a suitable method
for the DSS.
In writing this thesis I found two different approaches for decision making. The first
approach is a combination of fuzzy decision making and the TOPSIS method. As a
second approach, I used a genetic algorithm for optimization.
To give the reader a better understanding of the background to the chapters, I
began every chapter with a short introduction and definitions.
The sources that I used in the thesis were obtained by searches in the Mälardalen
University computer system and also journal articles which I found in the
University library and through Google searches.
The execution phase consisted of designing, building and testing the initial algorithm in
Matlab. The implementation phase was a collaboration between me and Jonas
Österberg; the initial algorithm was implemented in JavaScript.
In section 2.5.1 I used the Riccati differential equation to find Eq. 2.16, which
is a combination of two equations that were introduced by Yuo-Tern Tsai and Kuo-Shong
Wang.
I believe that this thesis is understandable for readers who have a moderate
familiarity with English and moderate knowledge of optimization topics. The
reader needs to have a good knowledge of DSS to understand the different methods
that we used in this project for decision making (such as TOPSIS and fuzzy
decision making).
I assigned chapter 1 to definitions and terms and also tried to define all concepts
which are required to understand the topic. In this project, I also include some
numerical examples to aid understanding.
I explained the future work in the conclusions section, which is to explore the use
of data mining with RapidMiner to get a more accurate model for the health of
components.
This thesis was a part of the IRIS project (in Swedish: Integrerat dynamiskt
prognostiserande underhållsstöd), which is a five-year project at Scania's
Research and Development department in Sweden. The approval process for this
project at Scania involves review and approval at each phase of project
development.
To discuss the results, we had scheduled meetings with the IRIS project team
every week and also with my supervisor Dr. Jonas Biteus, who is the project leader
for the IRIS project.
I also presented this project at the ICNPAA 2014 Congress [47] in Norway
(Mathematical Problems in Engineering, Aerospace and Sciences). The congress article,
unpublished at this point, has the title 'Solving complex maintenance
planning optimization problems using stochastic simulation and multicriteria
fuzzy decision making' and was written under the supervision of Professor Sergei
Silvestrov, Jonas Österberg and Dr. Jonas Biteus.
APPENDIX A
Java code - Event Triggers
In this appendix we list the three types of event triggers used in our simulations.
LinearTrigger: This trigger returns a True decision if the linear combination of the
state variables is at least 1. The coefficients for the linear combination are stored in
the val array, and these values will change on a mutate or mate method call.
1 package com.vuebe.vhen;
2
3 import java.util.Random;
4
5 public abstract class Trigger {
6 public static Random random = new Random();
7 public Event event;
8
9 public abstract void prepare(State state);
10
11 public abstract boolean decide(State state);
12
13 public Trigger mutate(double p) {
14 return this;
15 }
16
17 public Trigger mate(Trigger b, double interpol) {
18 return this;
19 }
20
21 public Trigger random() {
22 return this;
23 }
24 }
25
26
27
28 package com.vuebe.vhen;
29
30 public class AlwaysTrigger extends Trigger {
31
32 public AlwaysTrigger(Event event) {
33 this.event = event;
34 }
35
36 @Override
37 public void prepare(State state) {
38 }
39
40 @Override
41 public boolean decide(State state) {
42 return true;
43 }
44
45 }
46
47
48 package com.vuebe.vhen;
49
50 public abstract class StochasticTrigger extends Trigger {
51 private double value;
52 private double triggervalue;
53
54 public StochasticTrigger(Event event) {
55 this.event = event;
56 }
57
58 public abstract double value(State state);
59
60 @Override
61 public void prepare(State state) {
62 value = value(state);
63 if (value < 0)
64 value = 0;
65 if (value > 1)
66 value = 1;
67 triggervalue = value + random.nextDouble() * (1 - value);
68 }
69
70 @Override
71 public boolean decide(State state) {
72 value = value(state);
73 if (value < 0)
74 value = 0;
75 if (value > 1)
76 value = 1;
77 return value >= triggervalue;
78 }
79 }
80
81
82
83 package com.vuebe.vhen;
84
85 import java.util.Random;
86
87 import com.vuebe.util.Util;
88
89 public class LinearTrigger extends Trigger {
90 public double[] val;
91 public double[] mask;
92
93 public LinearTrigger(Event event, double[] val, double[] mask) {
94 this.event = event;
95 this.val = val;
96 this.mask = mask;
97 }
98
99 public LinearTrigger(Event event, double[] val) {
100 this.event = event;
101 this.val = val;
102 this.mask = new double[val.length];
103 setMask();
104 }
105
106 public LinearTrigger(Event event, State S) {
107 this.event = event;
108 this.val = new double[S.val.length];
109 this.mask = new double[val.length];
110 setMask();
111 }
112
169 }
170 return new LinearTrigger(event, vm, mask);
171 }
172
173 @Override
174 public void prepare(State state) {
175 }
176
177 @Override
178 public boolean decide(State state) {
179 double sum = 0;
180 for (int i = 0; i < val.length; i++) {
181 sum += val[i] * state.val[i];
182 }
183 return sum >= 1;
184 }
185
186 public String toString() {
187 return "val " + Util.toString(val) + " mask " + Util.
toString(mask);
188 }
189 }
APPENDIX B
Java code - Genetic Algorithm
In this appendix we list the Java code for the genetic algorithm used in our
simulations.
1 package com.vuebe.ga;
2
3 public interface Individual extends Comparable<Individual> {
4
5 /** Compute the value of this individual. */
6 public double evaluate();
7
8 /** Return the computed value of this individual. */
9 public double getValue();
10
11 public Individual mutate(double p);
12
13 public Individual[] mate(Individual mate);
14
15 public Individual random();
16 }
17
18
19
20 package com.vuebe.ga;
21
22 import java.util.Collections;
23 import com.vuebe.vhen.*;
24 import java.util.Random;
25 import java.util.Vector;
26
27 import com.vuebe.util.Util;
28
29 public class GA {
30 public static final int THREADCOUNT = 8;
31
32 public static final int TOURNAMENT = 0;
33 public static final int ROULETTE_WHEEL = 1;
34
35 private int Ne = 2; // Elite individuals
36 private int Ns = 50; // Selection individuals
37 private int N = Ne + Ns; // Population size N=Ne+Ns
57 best = value;
58 ind = ind2;
59 }
60 }
61 return ind;
62 }
63
64 public void setSelectionSize(int ns) {
67 this.Ns = ns;
68 this.N = Ns + Ne;
69 }
70
71 public void setEliteSize(int ne) {
72 if (ne < 0)
74 this.Ne = ne;
75 this.N = Ns + Ne;
76 }
77
78 public int getGeneration() {
79 return generation;
80 }
81
82 public int getEvaluations() {
83 return evaluations;
84 }
85
86 public void initiatePopulation(Individual seed) {
87 population = new Vector<Individual>();
88 evaluations = 0;
89 for (int i = 0; i < N; i++) {
90 Individual individual = seed.random();
91 individual.evaluate();
92 evaluations++;
93 population.add(individual);
94 }
95 }
96
97 public void sort() {
98 Collections.sort(population);
99 }
100
101 public void evolve() {
102 // Util.dump("Evolve");
103 sort();
104
105 String s = "";
106 for (int i = 0; i < population.size(); i++) {
107 s += population.get(i).getValue() + " ";
108 }
109 // Util.dump("Values: " + s);
110
111 Vector<Individual> nextpopulation = new Vector<Individual >();
112
113 // Selection
114 if (selectionmethod == TOURNAMENT) {
115 for (int tourn = 0; tourn < Ns / 2; tourn++) {
116 Vector<Individual> tour = new Vector<Individual>();
117 for (int i = 0; i < Nt; i++) {
118 tour.add(population.get(random.nextInt(population. size())));
119 }
120 Collections.sort(tour);
121
122 Individual first = null;
123 Individual second = null;
124 int firstindex = -1;
125 combine: while (true) {
126 double P = Pt;
127 for (int i = Nt - 1; i >= 0; i--) {
128 if (i != firstindex && random.nextDouble() < P) {
129 if (first == null) {
130 first = tour.get(i);
131 firstindex = i;
132 } else {
133 second = tour.get(i);
134 Individual[] children = first.mate(second);
135
136 children[0] = children[0].mutate(Pmut);
137 children[1] = children[1].mutate(Pmut);
138
139 nextpopulation.add(children[0]);
140 nextpopulation.add(children[1]);
141
142 children[0].evaluate();
143 children[1].evaluate();
144
145 evaluations += 2;
146
147 break combine;
148 }
149 }
150 P = P * (1 - Pt);
151
152 }
153 }
154
155 }
156 } else
164 }
165
166 generation++;
167 population = nextpopulation;
168 }
169 }
APPENDIX C
Java code - Simulation Classes
In this appendix we list the Java code for simulation classes used in our simulations.
State: Represents a system state consisting of a set of double values (real
numbers) in the val field. The state variables are named by the corresponding
array values in the name field. The dep array provides the possibility of having some
state variables dynamically computed as functions of the other state variables.
Event: The superclass of all events that may occur in the simulation; an event's only
result is to change the values of the state variables in its run method.
StateFunction: Used in the State class to provide the possibility of dynamically
computed state variables.
System: This is the main simulation class realizing the simulation algorithm. The
system is defined by the initialstate field and the set of triggers (and the events they
trigger) in the triggers field. The toMatlab method creates a string in Matlab
syntax constructing the experimental data from a number of
simulations performed on this system by the simulate method. The class also
provides mutation and mating methods, since the class can be used as an individual
in the genetic optimization algorithm. The main simulation algorithm is contained
in the simulateOne method, performing a simulation of the system over the entire
time horizon. This method is written somewhat more generally than the
algorithm described previously in the report, for experimentation purposes and
for ease of adaptation to more complex scenarios. The algorithm, given a
correctly ordered list of event triggers, works equivalently except for a short time-step
difference.
1 package com.vuebe.vhen;
2
3 import java.util.Hashtable;
4
5 public class State {
6 public Hashtable<String, Integer> ind = new Hashtable<
String, Integer>();
7 public String[] name;
8 public double[] val;
9 public StateFunction[] dep;
10
11 public State(State s) {
12 ind = (Hashtable<String, Integer>) s.ind.clone();
13 val = s.val.clone();
14 dep = s.dep.clone();
15 name = s.name;
16 }
17
18 public State(String... vars) {
19 for (int i = 0; i < vars.length; i++) {
20 if (ind.put(vars[i], i) != null) {
21 throw new RuntimeException("Duplicate variable");
22 }
23 }
25 this.name = vars.clone();
26 }
27
56
57
58
59 package com.vuebe.vhen;
60
61 public abstract class Event {
62 public abstract void run(State state);
63 }
64
65
66
67 package com.vuebe.vhen;
68
69 public abstract class ValueFunction {
70 public abstract double value(System system);
71 }
72
73
74
75 package com.vuebe.vhen;
76
77 public abstract class StateFunction {
78 public abstract double value(State state);
79 }
80
81
82
83 package com.vuebe.vhen;
84
85 import java.util.Random;
86 import java.util.Vector;
87
88 import com.vuebe.ga.Individual;
89 import com.vuebe.util.Util;
90
91 public class System implements Individual {
92 public static Random random = new Random();
93
94 private State initialstate;
95 private int horizon;
96 private ValueFunction valuefunction;
97 public Vector<Trigger> triggers = new Vector<Trigger>();
98 public Vector<String> triggername = new Vector<String>();
99
100 // derived or simulated
101 int itime;
102
103 public int N;
139 sb.append("V={");
140 for (int i = 0; i < M.length; i++) {
141 sb.append("V" + i);
142 if (i != M.length - 1)
143 sb.append(",");
144 }
145 sb.append("}\n");
146
147 for (int i = 0; i < C.length; i++) {
148 sb.append("C" + i + "="
149 + Util.toMatlab(Util.multiplyNew(C[i], 1d / N)));
150 sb.append(";\n");
151 }
152
153 sb.append("C={");
154 for (int i = 0; i < C.length; i++) {
155 sb.append("C" + i);
156 if (i != C.length - 1)
157 sb.append(",");
158 }
159 sb.append("}\n");
160
161 sb.append("triggername={");
162 for (int i = 0; i < C.length; i++) {
163 sb.append("’" + triggername.get(i) + "’");
164 if (i != C.length - 1)
165 sb.append(",");
166 }
167 sb.append("}\n");
168
169 return sb.toString();
170 }
171
172 public State getInitialState() {
173 return initialstate;
174 }
175
176 public Trigger getTrigger(int index) {
177 return triggers.get(index);
178 }
179
180 public void setInitialState(State S) {
181 initialstate = S;
182 itime = initialstate.index("time");
183 }
184
185 public void setValueFunction(ValueFunction vf) {
186 this.valuefunction = vf;
187 }
188
189 public void setHorizon(int horizon) {
190 this.horizon = horizon;
191 }
192
193 public void setSimulationCount(int simulationcount) {
194 this.simulationcount = simulationcount;
195 }
196
197 public int getHorizon() {
198 return horizon;
199 }
200
201 public int addTrigger(Trigger trigger, String name) {
202 triggers.add(trigger);
203 triggername.add(name);
204 return triggers.size() - 1;
205 }
206
207 public void dump() {
208 Util.dump("Initial: " + Util.toString(initialstate.val));
209 Util.dump("Horizon: " + horizon);
210 for (int i = 0; i < triggers.size(); i++) {
211 Util.dump("Trigger " + i + ": " + triggers.get(i).
toString());
212 }
213 }
214
215 private void simulateOne() {
216 State S = new State(initialstate);
217 S.computeDependent();
218 Random rand = new Random();
219 S.val[itime] = 0;
220 while (S.val[itime] < horizon) {
221 // prepare triggers
            // ... (code omitted)

            s2.triggers.add(triggers.get(i).mate(mate.triggers.get(i),
                    0.5 - interpol));
        }
        return new System[] { s1, s2 };
    }

    public System cloneRandom() {
        System s = cloneNoTriggers();
        for (int i = 0; i < triggers.size(); i++) {
            s.triggers.add(triggers.get(i).random());
        }
        return s;
    }

    @Override
    public int compareTo(Individual ind) {
        return (int) Math.signum(getValue() - ind.getValue());
    }

    @Override
    public double evaluate() {
        simulate();
        return value;
    }

    @Override
    public double getValue() {
        // ... (code omitted)

APPENDIX D
Java code - Example Problem Class
In this appendix we list the Java code for the construction and analysis of instances
of the example problem class of Section 6.4.
TestFactory: This class is the main program entry point, through the main method
and subsequently the run method. A SystemFactory object is first constructed,
some parameters are specified for the desired system, and the system is then created
by the createSystem method call. The created system is then fed as the seed
Individual to a genetic algorithm by the ga.initiatePopulation(sys); statement. The
genetic algorithm is then run for 10000 iterations, and the best individual found by
the optimization is saved to file in Matlab format for analysis and plotting in Matlab.
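As a compact reference for this flow, the following is a minimal sketch of such a driver class under assumed names: the no-argument GA constructor, the GA methods iterate() and getBest(), the toMatlab() output helper, the class name TestFactorySketch and the example parameter values are illustrative assumptions, while SystemFactory, createSystem, initiatePopulation and the 10000 iterations are taken from the description above. The complete listings, beginning with the SystemFactory class, follow.

package com.vuebe.vhen;

import com.vuebe.ga.GA;
import com.vuebe.util.Util;

public class TestFactorySketch {
    public static void main(String[] args) {
        new TestFactorySketch().run();
    }

    public void run() {
        // Construct the factory and set a few example parameters
        // (values chosen for illustration only).
        SystemFactory sf = new SystemFactory();
        sf.NC = 3;    // number of components
        sf.sv0 = 2;   // minimum number of state variables per component

        // Create the seed system and hand it to the genetic algorithm.
        System sys = sf.createSystem();
        GA ga = new GA();
        ga.initiatePopulation(sys);

        // Run the optimization for 10000 iterations (method name assumed).
        for (int i = 0; i < 10000; i++) {
            ga.iterate();
        }

        // Save the best individual in Matlab format for later analysis
        // (getBest() and toMatlab() are assumed names).
        System best = (System) ga.getBest();
        Util.dump(best.toMatlab());
    }
}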
package com.vuebe.vhen;

import java.util.Random;
import java.util.Vector;

import com.vuebe.math.WeibullDistribution;
import com.vuebe.util.Util;

public class SystemFactory {
    public Random random = new Random(0);

    // T=T1*T2*T3, hours days weeks
    public int T;
    public int T1 = 24;
    public int T2 = 7;
    public int T3 = 3;

    // Number of components
    public int NC;

    // number of state variables per component
    public int sv0;
    // ... (code omitted)

        Util.dump("T=" + T);
        Util.dump("NC=" + NC);
        for (int i = 0; i < c.length; i++) {
            Util.dump("COMPONENT " + i);
            c[i].dump();
        }
        // Normal Event
        Util.dump("------");
        Util.dump("NEfitness=" + NEfitness);
        Util.dump("NEefficiencyfitness=" + Util.toString(NEefficiencyfitness));
        Util.dump("NESVadd=" + Util.toString(NESVadd));
    }

    public System createSystem() {
        System sys = new System();
        T = T1 * T2 * T3;

        sys.setHorizon(T);
        c = new Component[NC];
        for (int i = 0; i < NC; i++) {
            c[i] = new Component();
        }

        Vector<String> statevars = new Vector<String>();
        Vector<StateFunction> statefunc = new Vector<StateFunction>();

        // ... (code omitted)


        return sys;
    }

    public Trigger createNE() {
        NEfitness = inDouble(NEprofitmin, NEprofitmax);
        NEefficiencyfitness = new double[totalEAcount];
        NESVadd = new double[totalSVcount];

        for (int i = 0; i < totalEAcount; i++) {
            NEefficiencyfitness[i] = -inDouble(0, NEprofitefficiencypenaltymax);
        }
        for (int i = 0; i < totalSVcount; i++) {
            NESVadd[i] = inDouble(0, NESVaddmax);
            // Util.dump("totalSVcount "+totalSVcount+" : "+NESVadd[i]);
        }

        return new AlwaysTrigger(new Event() {
            @Override
            public void run(State state) {
                state.val[itime]++;
                state.val[ifitness] += NEfitness;
                for (int i = 0; i < totalEAcount; i++) {
                    state.val[ifitness] += NEefficiencyfitness[i]
                            * state.val[totalEAindex + i];
                }
                for (int i = 0; i < totalSVcount; i++) {
                // ... (code omitted)

            }
            Util.dump("DEtime=" + Util.toString(DEtime));
            Util.dump("DEfitness=" + Util.toString(DEfitness));
            Util.dump("------");
            Util.dump("EAscale=" + Util.toString(EAscale));
            Util.dump("EAshape=" + Util.toString(EAshape));
            for (int i = 0; i < EAdependency.length; i++) {
                Util.dump("EAdependency" + i + "="
                        + Util.toString(EAdependency[i]));
            }

        }

        public Component() {
            svcount = inInt(sv0, sv1);

            // Failure age
            FAcount = inInt(FA0, FA1);
            FAscale = new double[FAcount];
            FAshape = new double[FAcount];
            FAfunc = new WeibullDistribution[FAcount];
            for (int i = 0; i < FAcount; i++) {
                FAscale[i] = inDouble(FAscalemin * T, FAscalemax * T);
                FAshape[i] = inDouble(FAshapemin, FAshapemax);
                FAfunc[i] = new WeibullDistribution(FAscale[i], FAshape[i]);
            }
            // ... (code omitted)

            }

        }

        public LinearTrigger createDETrigger(final int index) {
            DEtrigger[index] = new LinearTrigger(new Event() {
                @Override
                public void run(State state) {
                    for (int i = 0; i < svcount; i++) {
                        state.val[svindex + i] = state.val[svindex + i]
                                * DEmul[index][i] + DEadd[index][i];
                    }
                    state.val[itime] += DEtime[index];
                    state.val[ifitness] += DEfitness[index];
                }
            }, initialstate);
            DEtrigger[index].clearMask();
            for (int j = 0; j < svcount; j++) {
                DEtrigger[index].setMask(svindex + j);
            }
            return DEtrigger[index];
        }

        public StateFunction createFAFunc(int i) {
            return createFunc(FAdependency[i], FAfunc[i]);
        }

        public StateFunction createEAFunc(int i) {
            return createFunc(EAdependency[i], EAfunc[i]);
        }

        public StateFunction createFunc(final double[] dependency,
                final WeibullDistribution func) {
            return new StateFunction() {
                @Override
                public double value(State state) {
                    double age = 0;
                    for (int j = 0; j < svcount; j++) {
                        age += dependency[j] * state.val[svindex + j];
                    }
                    return func.value(age);
                }

            };
        }
    }
}


package com.vuebe.vhen;

import com.vuebe.ga.GA;
import com.vuebe.math.WeibullDistribution;
import com.vuebe.util.Util;

// ... (code omitted)

        sf.EA1 = 2;
        sf.EAscalemin = 1;
        sf.EAscalemax = 1;
        sf.EAshapemax = 3;
        sf.FA1 = 2;
        sf.FAshapemin = 1;
        sf.FAshapemax = 2;
        sf.FAdependencymax = 1;

        sf.useFE = true;
        sf.FEmul0 = 0;
        sf.FEmul1 = 0;
        sf.FEadd0 = 0;
        sf.FEadd1 = 0;
        sf.FEtime0 = 2;
        sf.FEtime1 = 4;
        sf.FEfitness0 = -5;
        sf.FEfitness1 = -10;

        sf.useDE = true;
        sf.DEmul0 = 0;
        sf.DEmul1 = 0;
        sf.DEadd0 = 0;
        sf.DEadd1 = 0;
        sf.DEtime0 = 1;
        sf.DEtime1 = 1;
        // ... (code omitted)