
PROJECT REPORT ON

MATHEMATICS

Submitted in partial fulfillment of the requirements for the award


of the degree of M.Sc

-
Session
2022-2024

Submitted by

Name of Student: Misbahul Islam


Course:4th semester
Enrollment No.2211A025361

To Whom It May Concern

I, MISBAHUL ISLAM, Enrolment No. 2211A0250361, from Course: M.Sc. Mathematics, 4th semester, 2024, of the ISBM University, Nawapara (Kosmi), Block & Tehsil - Chhura, District - Gariyaband, Chhattisgarh, hereby declare that the Project Report entitled "SOLVING COMPLEX MAINTENANCE PLANNING OPTIMIZATION PROBLEMS USING STOCHASTIC SIMULATION AND MULTICRITERIA FUZZY DECISION MAKING" is an original work and the same has not been submitted to any other Institute for the award of any other degree.

Date: 05-08-2024

Signature of the Student

Abstract
The main goal of this project is to explore the use of stochastic simulation, genetic algorithms, fuzzy decision making and other tools for solving complex maintenance planning optimization problems. We consider two different maintenance activities: corrective maintenance and preventive maintenance. Since the evaluation of specific candidate maintenance policies can take a long time to execute, and the problem of finding the optimal policy is both nonlinear and non-convex, we propose the use of genetic algorithms (GA) for the optimization. The main task of the GA is to find the optimal maintenance policy, which involves: (1) the probability of breakdown, (2) calculation of the cost of corrective maintenance, (3) calculation of the cost of preventive maintenance and (4) calculation of ROI (Return On Investment). Another goal of this project is to create a decision making model for multi-criteria systems. To find a near-optimal maintenance policy, we need to have an overview of the health status of the system components. To model the health of a component we must find all the operational criteria that affect it. We also need to analyze alternative maintenance activities in order to make the best maintenance decisions. To do this, the TOPSIS method and fuzzy decision making have been used.

To evaluate the proposed methodology, the internal combustion engine cooling of a typical Scania truck was used as a case study.

Keywords: Genetic algorithm, corrective maintenance, preventive maintenance, ROI, multi-criteria decision making, TOPSIS, fuzzy decision making, discrete event simulation, intelligent agent

Contents
List of Tables 7
List of Figures 8
1 Introduction 10
1.1 Preventive Maintenance . . . . . . . . . . . . . . . . . . . . . 11
1.1.1 Application and Advantage . . . . . . . . . . . . . . . 11
1.2 Corrective Maintenance . . . . . . . . . . . . . . . . . . . . . . 12
1.2.1 Advantage and Disadvantage . . . . . . . . . . . . . . 12
1.3 Return on Investment . . . . . . . . . . . . . . . . . . . . . . . 13
1.3.1 Life Cycle Cost . . . . . . . . . . . . . . . . . . . . . . 14
1.3.2 Total Cost Calculation . . . . . . . . . . . . . . . . . . 14
1.4 Replacement Strategies . . . . . . . . . . . . . . . . . . . . . . 16
1.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2 Component health 19
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Definitions and Functions . . . . . . . . . . . . . . . . . . . . 19
2.3 Cooling System . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4 The Consequences of PM on the Component Health . . . . . . 25
2.5 Dynamic Reliability . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.1 Riccati di↵erential equation . . . . . . . . . . . . . . . 25
2.5.2 Dynamic Reliability Equation . . . . . . . . . . . . . . 27
2.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3 Multi Criteria Fuzzy Decision Making (MCFD) 30
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2 Fuzzy Set and Membership Function . . . . . . . . . . . . . . 31
3.3 IFS Generalize Fuzzy Sets . . . . . . . . . . . . . . . . . . . . 31

3.4 Fuzzy Implication Operators . . . . . . . . . . . . . . . . . . . 32
3.5 Inclusion Degree Function of IFS ................ 32
3.6 TOPSIS Method . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.1 The Structure of TOPSIS Method . . . . . . . . . . . . 33

3.6.2 TOPSIS Method in Multiple Criteria Fuzzy Decision Making . . . . . . . . . . 35
3.7 Problem Statement ....................... 36
3.8 A Numerical Example . . . . . . . . . . . . . . . . . . . . . . 38
3.9 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4 Optimization Methods 42
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2 Classical and Non-classical Optimization Methods . . . . . . . 42
4.3 Global Optimization ....................... 43
4.4 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . 45
4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5 Genetic Algorithm 47
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.2 Structure ............................. 47
5.3 Applications and Advantages . . . . . . . . . . . . . . . . . . 48
5.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6 Maintenance Optimization Model 50
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
6.2 Simulation Framework Model . . . . . . . . . . . . . . . . . . 50
6.3 The Simulation Algorithm . . . . . . . . . . . . . . . . . . . . 52
6.4 Example Problem Class . . . . . . . . . . . . . . . . . . . . . 53

6.5 A Numerical Example . . . . . . . . . . . . . . . . . . . . . . 55
6.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7 Summary, Conclusions and Future Work 61
7.1 Summary ............................. 61
7.2 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
7.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
8 Summary of reflection of objectives in the thesis 63
8.1 Objective 1 - Knowledge and Understanding . . . . . . . . . . 63
8.2 Objective 2 -Methodological Knowledge . . . . . . . . . . . . . 64
8.3 Objective 3 - Critically and Systematically Integrate Knowledge 64
8.4 Objective 4 - Independently and Creatively Identify and Carry
out Advanced Tasks ....................... 64
8.5 Objective 5 - Present and Discuss Conclusions and Knowledge 65
8.6 Objective 7 - Scientific, Social and Ethical Aspects . . . . . . 65
A Java code - Event Triggers 66

B Java code - Genetic Algorithm 71


C Java code - Simulation Classes 76
D Java code - Example Problem Class 85

List of Tables
1.1 Variable descriptions . . . . . . . . . . . . . . . . . . . . . . . 17
2.1 Idler Roller parts description . . . . . . . . . . . . . . . . . . . 23
2.2 The values in this table has been hidden by the request from
Scania . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1 Binary implication ........................ 32
3.2 Decision Making by TOPSIS . . . . . . . . . . . . . . . . . . . 35

3.3 The Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.4 The alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5 The inclusion degrees of A+1 in Mi and the inclusion degrees of
Mi in A−1 . . . . . . . . . . . . . . . . . . . . 39
3.6 The inclusion degrees of A+2 in Mi and the inclusion degrees of Mi in A−2
3.7 The inclusion degrees D+(Mi) and D−(Mi) . . . . . . . . . . . 40


6.1 Parameters defining random events. . . . . . . . . . . . . . . . 56
6.2 Parameters defining plannable events. . . . . . . . . . . . . . . 56
6.3 Parameters defining efficiency measures. . . . . . . . . . . . 56
6.4 Parameters defining the default event. ............. 56

List of Figures
1.1 Corrective maintenance function. ................ 12
1.2 Costs illustration [6] ....................... 15
1.3 An illustration of component age as a function of time with
periodic preventive replacement . . . . . . . . . . . . . . . . . 15
2.1 The bathtub curve hazard function [26] . . . . . . . . . . . . . 21
2.2 Cooling system parts for chassis types P-, G-, R-, T-series . . 22
2.3 Idler roller [28] .......................... 23

3.1 TOPSIS illustration . . . . . . . . . . . . . . . . . . . . . . . . 34


4.1 Point ’A’ shows a global optimization and the other points
indicate a local optimization [12] ................ 44
4.2 An illustration of Evolutionary algorithm ........... 46
5.1 Genetic code of the parents and the offspring before and after
the crossover . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.1 The structure of a Decidor . . . . . . . . . . . . . . . . . . . . 51
6.2 Schematic view of our simulation algorithm. . . . . . . . . . . 52
6.3 Illustration of the model . . . . . . . . . . . . . . . . . . . . . 55
6.4 Results from optimization of linear decidors in the example
problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.5 Convergence of expected profit. . . . . . . . . . . . . . . . . . 58
CHAPTER 1

Introduction

One of the most important factors in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. The main goals of all maintenance activities are: (1) wait as long as possible to perform maintenance, so that the amount of useful lifetime of parts that is thrown away is minimized while avoiding failure; (2) find the optimal maintenance policy, consisting of scheduled and unscheduled maintenance, so that the life cycle cost is minimized while satisfying system availability and safety requirements [1].

Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance activities (CM) and preventive maintenance activities (PM). In this work, we also define inspection and imperfect maintenance1, which can be used as maintenance activities in some situations. Each maintenance policy contains various activities such as 'replacement', 'minimal repair' or 'main repair', etc.

A near-optimal maintenance policy can utilize a single activity or a combination of two or more activities, depending on company policies and procedures. The work reported in this thesis has been conducted at Scania, a major Swedish automotive manufacturer of heavy trucks and buses. According to Scania's policy, there is no repair strategy for many components, and replacement is an acceptable strategy for both corrective and preventive maintenance.

The optimization of the maintenance activities is largely affected by their financial implications for a specific corporation, where two equivalent systems (mechanical or otherwise) under similar operations may require two quite different maintenance policies for two different corporations.

1 Imperfect maintenance is a standard maintenance activity which reduces the failure intensity but does not leave the system as good as new [45].

With the drive to reduce the total cost of ownership for vehicles, maintenance cost analysis and the resulting decisions have become subjects of scrutiny today [1]. A Maintenance Decision Support System (MDSS) must be justified by an extensive cost-benefit analysis. With accurate information about the vehicle health state and failure prediction capabilities, the decision process can be made more informed.

In this chapter we define some engineering and economic concepts and also
analyze various strategies.

1.1 Preventive Maintenance

Definition 1. Preventive maintenance corresponds to a type of planned


maintenance that improves remaining useful life for a component by preventing
excess depreciation and impairment [2].

The main goal with PM is avoiding or mitigating a breakdown in the system. PM


includes tests, measurements, adjustments, cleaning, lubrication, minimal repairs,
main repair and part replacements for avoiding component failure. PM has a
flexible structure and is not limited to the above activities
[3].

1.1.1 Application and Advantage

The most important application of using PM is energy optimization. We


summarize some other advantages of Preventive Maintenance as:

• Increasing the efficiency of equipment

• Extending the remaining useful life

• Increasing the system performance

• Increasing customer service because maintenance teams have less unplanned maintenance and can respond more quickly to new problems [2]

Moreover, PM measures increase overall safety levels and reduce insurance and inventory costs.

1.2 Corrective Maintenance

Definition 2. Corrective maintenance corresponds to a maintenance type with different subtasks, such as identifying, isolating and rectifying a failure, so that the failed component can be restored to an operational condition within the tolerances for in-service operation [4].

In Figure 1.1 we summarize the function of CM:

Figure 1.1: Corrective maintenance function.

1.2.1 Advantage and Disadvantage

With corrective maintenance we improve product quality, increase component


lifetime and increase safety.

Higher investment in diagnostic equipment and training is a disadvantage of CM


[5].

1.3 Return on Investment

Definition 3. Return on investment denotes the benefit derived from having spent money on a system (or product) development, change and management. ROI is also a performance measure used to evaluate the efficiency of an investment opportunity [1].

ROI is calculated by:

ROI = (Return − Investment) / Investment    (1.1)

The 'Return − Investment' in the numerator is the loss or gain realized by making the investment [1].

For maintenance purposes ROI is defined as:

ROIM = (CH − CHM) / IHM    (1.2)

where

• CH is the life-cycle cost of the system when managed using unscheduled
maintenance
• CHM is the life-cycle cost of the system when managed using a health
management (HM) approach
• IHM is the investment in HM

In this work, CH is the total life-cycle cost of a system using corrective maintenance and CHM is the total life-cycle cost with preventive maintenance.

If we assume IHM = 1, then ROI simplifies to:

ROIM = CH − CHM    (1.3)

Our goal is to maximize ROI. To obtain this we want to make CHM as small as possible; CH is of course fixed for a particular system.
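
As a minimal illustration of Eq. 1.2 and Eq. 1.3, the following Java sketch computes ROIM; the class name and all numeric values are hypothetical and are not taken from the Scania case study.

// Minimal sketch of the ROI calculation in Eq. 1.2 / Eq. 1.3.
// All numbers below are hypothetical and only illustrate the formulas.
public class RoiExample {
    /** ROI_M = (C_H - C_HM) / I_HM, Eq. 1.2 */
    static double roiM(double cH, double cHM, double iHM) {
        return (cH - cHM) / iHM;
    }

    public static void main(String[] args) {
        double cH  = 120_000.0; // life-cycle cost with corrective maintenance only (hypothetical)
        double cHM = 90_000.0;  // life-cycle cost with preventive maintenance (hypothetical)
        double iHM = 10_000.0;  // investment in health management (hypothetical)

        System.out.println("ROI_M (Eq. 1.2) = " + roiM(cH, cHM, iHM));   // (120000-90000)/10000 = 3.0
        System.out.println("ROI_M (Eq. 1.3, I_HM = 1) = " + (cH - cHM)); // simplified form
    }
}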

1.3.1 Life Cycle Cost

The life-cycle cost (LCC) can be divided into eight parts,

LCC = Cini + Cins + Ce + Co + Cmr + Csd + Cenv + Cdd,

where

• Cini : initial cost (purchase price of the component and all items unique to
that component)
• Cins : installation cost (shipping cost, rigging, installation and start up of
the component)
• Ce : energy cost (predicted energy cost for system operation)

• Co : operation cost (labor cost of normal system supervision)

• Cmr : maintenance and repair cost (include both routine and predicted
repairs)
• Csd : downtime cost (loss of production cost)

• Cenv : environmental cost

• Cdd is the decommissioning and disposal cost (include disposal of


component, associated equipment and site restoration)
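
As a small numeric illustration of the LCC decomposition above (all values are hypothetical and chosen only to show the summation):

// Minimal sketch: summing the eight LCC components defined above.
// The numeric values are hypothetical illustration values only.
public class LccExample {
    public static void main(String[] args) {
        double cIni = 5000, cIns = 800, cE = 12000, cO = 6000;
        double cMr = 4000, cSd = 2500, cEnv = 300, cDd = 400;
        double lcc = cIni + cIns + cE + cO + cMr + cSd + cEnv + cDd;
        System.out.println("LCC = " + lcc); // 31000.0
    }
}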

1.3.2 Total Cost Calculation

We calculate total maintenance cost G at time t by

G(t) = K + C(t) – R(t) (1.4)

where K is the replacement cost, C(t) is the maintenance cost and R(t) is rescue
cost [7].

A preventive replacement is an appropriate decision if and only if the component has a non-negligible failure rate and the preventive replacement cost is lower than the corrective maintenance cost.

In Fig. 1.2, we illustrate the different maintenance costs per unit time.


Figure 1.2: Costs illustration [6]

The maintenance cost curve represents preventive maintenance, which increases with time, that is, with more PM. The breakdown cost relates to corrective maintenance, which decreases with time because of the increased PM. As Figure 1.2 shows, there is an optimum point at which preventive maintenance should be performed, namely where the maintenance cost and the breakdown cost are equal, which is point A in this case [7].

Figure 1.3: An illustration of component age as a function of time with periodic
preventive replacement

In Figure 1.3, the component ages with time; the component age is reset upon each preventive replacement.

1.4 Replacement Strategies

As we mentioned before, there are two maintenance activities: corrective


maintenance and preventive maintenance.

Barlow and Hunter examined the optimal use of preventive maintenance in their model from 1960 [8]. We summarize the Barlow and Hunter model as:

, (1.5)

where

f(t) is the density function of F(t)

F(t) is the failure probability function of the component

Ck is the corrective maintenance cost

Cf is the preventive maintenance cost

Cf < Ck
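
Since the exact expression of Eq. 1.5 is not reproduced above, the following is only a sketch of a standard age-replacement cost rate written in these variables (a common textbook form, not necessarily the report's exact equation):

% Standard age-replacement cost rate per unit time (textbook form, sketch only)
C(t_p) \;=\; \frac{C_f\,[1 - F(t_p)] \;+\; C_k\,F(t_p)}{\int_0^{t_p} [1 - F(t)]\,dt}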

Although Eq. 1.5 provides a simple solution for optimization of fixed-interval replacement times, the assumptions in the construction of the underlying model are actually quite restrictive, and the model subsequently fails to capture many real-world problems in sufficient detail.

Wang classified Barlow and Hunter's model in 2002 and introduced three replacement strategies, which are: minimal repair, imperfect repair and perfect repair [9].

In this section we review different strategies for replacement and select one replacement strategy for our project.

The variables in Table 1.1 are used in our set of possible maintenance strategies.

Variables Description

CPM Preventive maintenance cost

CCM Corrective maintenance cost

µ Mean failure time

tp,2tp,3tp,... Planned time for replacement


Table 1.1: Variable descriptions

• Strategy 1:

Component replacement occurs only when a component fails; there is no preventive maintenance in this strategy. We estimate the number of failures up to time T. The maintenance cost here depends solely on the corrective maintenance cost, and we formulate the strategy as a cost rate that depends only on CCM and µ (see the sketch after this list).

• Strategy 2:

In this strategy, we replace the component at the planned times tp, 2tp, ... regardless of the component age. We also replace a component whenever it breaks down. This strategy can be formulated in terms of H(tp), the expected number of failure replacements in the time interval (0, tp) (see the sketch after this list). To determine H(tp) we used renewal theory [43]: the probability density function (PDF) of the first failure is used and an inverse Laplace transform is taken to obtain H(tp).

• Strategy 3:

In this strategy, replacement occurs if and only if the component age reaches the planned time; as usual, we also replace the component if a breakdown occurs. The cost is determined from ρ(t), the density function, and P(tp), the probability of failure before time tp (see the sketch after this list).
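
The cost expressions referred to in the three strategies can be written, in standard textbook form and in the notation of Table 1.1, roughly as follows (a sketch only, not necessarily the report's exact equations):

% Strategy 1: replacement only at failure
C_1 \;=\; \frac{C_{CM}}{\mu}
% Strategy 2: block replacement at t_p, 2t_p, \dots, with H(t_p) expected failure
% replacements in (0, t_p)
C_2(t_p) \;=\; \frac{C_{PM} + C_{CM}\,H(t_p)}{t_p}
% Strategy 3: age-based replacement at age t_p, with P(t_p) the probability of
% failure before t_p and \rho(t) its density
C_3(t_p) \;=\; \frac{C_{PM}\,[1 - P(t_p)] + C_{CM}\,P(t_p)}
                    {t_p\,[1 - P(t_p)] + \int_0^{t_p} t\,\rho(t)\,dt}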

If we assume the same situation as in strategy 2, we are able to calculate the downtime. Assume Tp is the downtime that occurs during a planned replacement and Tf is the downtime for a replacement due to failure; the downtime per unit time can then be expressed in terms of Tp, Tf and H(tp).

With the same assumptions as in strategy 3, a corresponding downtime expression can be written in terms of Tp, Tf and P(tp).

We accept strategy 2, applied at the component level, as the better starting point in this work. In chapter 6 we determine tp and calculate maintenance costs and also ROI by using Eq. 1.3 and Eq. 1.4.

1.5 Conclusions

In this chapter we introduced some definitions related to the maintenance concept. We also described some economic terms such as 'Return on Investment' and introduced some formulas to calculate it. We are going to select the best maintenance activity for our problem, based on maximizing profit, in chapter 3.

In subsection 1.3.2 we illustrated the different maintenance costs and suggested an optimal point for a preventive maintenance plan. In section 1.4 we introduced and analyzed various strategies for replacement and selected strategy 2 for our problem according to Scania's policy. We are going to introduce 'component health' as a new engineering concept in the next chapter and try to find the consequences of preventive maintenance on the system performance.

As future work for the 'replacement strategy' part of this project we are going to examine strategy 3 and compare the results in the different cases.

CHAPTER 2
Component health

2.1 Introduction

Component health status, or component health, is an interpretation of the reliability R(t), which we use in this project. As we mentioned earlier, with preventive maintenance we are able to maximize ROI and also minimize the potential risk of breakdown. In this chapter we calculate the system reliability.

2.2 Definitions and Functions

• Failure rate (λ) is a frequency which indicates component failures per time unit; it can be expressed as [22]:

λ = r / (D · H · Af)    (2.1)

where

r : number of failures
D : number of components tested
H : test hours per component

Af is the acceleration factor derived from the Arrhenius equation, which can be calculated by [24]:

Af = exp[ (E / kB) · (1/Tu − 1/Tt) ]    (2.2)

where

E : activation energy of the failure mode
kB : Boltzmann's constant = 8.617 × 10⁻⁵ eV/K
Tu : use temperature
Tt : test temperature

• Mean Time To Failure (θ), or MTTF, is a standard industry value which shows the average time to failure. MTTF is calculated by [23]:

MTTF = 1 / λ    (2.3)

• Mean Time Between Failures (MTBF) is the expected time between failures of a system during operation, which can be calculated by [23]:

MTBF = MTTF + MTTR (2.4)

where MTTR is the Mean Time to Repair.

• Hazard function h(t) is a calculation of the failure rate over a time interval (t2 − t1).

• Failure rate function λ(t) shows the number of failures per unit of time; it is related to the hazard function and its plot over time has the same shape [25]:

λ(t) = f(t) / R(t)    (2.5)

where f(t) denotes the density of the time to first failure [27] and R(t) = 1 − F(t) is the reliability function.

• Degraded factor (A) is the amount by which the mean time between failures of a component is multiplied to get the operational MTBF [27].

The 'bathtub curve' in Figure 2.1 is a well-known curve used in reliability engineering and illustrates a particular form of the hazard function.


The bathtub curve includes three phases [27]:

1. Failures phase: decreasing failure rate

2. Phase with constant (random) failure rate: constant failure rate

3. Wear-out failures phase: increasing failure rate

Figure 2.1: The bathtub curve hazard function [26]

As the bathtub curve shows, the Mean Time To Failure relates to the phase with constant failure rate, and it shows the predicted elapsed time between inherent failures of a system during operation [27].

As we mentioned earlier, internal combustion engine cooling (cooling system)


was used as a case study in this project. We analyzed the performance of cooling
system in various areas.


To get a better understanding of the system health status, we calculate the failure rate, mean time between failures and time to first failure by using Eq. 2.1, Eq. 2.3 and Eq. 2.4.
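
The following Java sketch illustrates the failure-rate, acceleration-factor, MTTF and MTBF calculations above; the class name and all input values are purely hypothetical illustration assumptions, not Scania data.

// Minimal sketch of Eq. 2.1-2.4 (failure rate, Arrhenius acceleration factor,
// MTTF and MTBF). All input values are hypothetical, not Scania data.
public class ReliabilityBasics {
    static final double K_B = 8.617e-5; // Boltzmann's constant in eV/K

    /** Arrhenius acceleration factor, Eq. 2.2 (Ea in eV, temperatures in kelvin). */
    static double accelerationFactor(double ea, double tUse, double tTest) {
        return Math.exp((ea / K_B) * (1.0 / tUse - 1.0 / tTest));
    }

    /** Failure rate, Eq. 2.1: r failures over D components tested H hours each. */
    static double failureRate(int r, int d, double h, double af) {
        return r / (d * h * af);
    }

    public static void main(String[] args) {
        double af = accelerationFactor(0.7, 328.0, 398.0); // hypothetical test conditions
        double lambda = failureRate(3, 50, 2000.0, af);    // hypothetical test outcome
        double mttf = 1.0 / lambda;                        // Eq. 2.3
        double mtbf = mttf + 8.0;                          // Eq. 2.4 with a hypothetical MTTR of 8 h
        System.out.printf("Af=%.2f lambda=%.3e 1/h MTTF=%.1f h MTBF=%.1f h%n", af, lambda, mttf, mtbf);
    }
}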

2.3 Cooling System

The cooling system is an important system in the truck's engine. The system's task is to continuously cool the engine by circulating coolant liquid2.

The cooling system consists of various parts such as: coolant, idler roller, compressor, belt tensioner, axeltapp, generator and poly-V belt.

Figure 2.2 shows the cooling system parts in typical Scania trucks (P-, G-, R- and T-series).

2 A mixture of antifreeze and water.


Figure 2.2: Cooling system parts for chassis types P-, G-, R-, T-series

To avoid lengthy calculations and to get a better analysis of the health of the cooling system, we chose the idler roller as a representative subcomponent.

The idler roller consists of 10 parts:

Figure 2.3: Idler roller [28]

where:

Item   Description              Quantity
1      Shell                    1
2      Bearing Housing          2
3      Shaft                    1
4      Inner Snap Ring          2
5      Bearing                  2
6      Female Labyrinth Seal    2
7      Male Labyrinth Seal      2
8      Outer Labyrinth Seal     2
9      Outer Snap Ring          2
10     Cover                    2

Table 2.1: Idler Roller parts description

As our study shows, the bearing (part number 5) is the part most vulnerable to failure.

Table 2.2 on the next page presents the failure rate, TTFF (time to first failure), MTBF and MTTF for the idler roller; the values were calculated using Eq. 2.1, Eq. 2.3 and Eq. 2.4. To compute the idler roller health status we used the Scania workshop manual and warranty statistics (survival analysis).

Note: The values in Table 2.2 have been hidden according to Scania's security policy for confidential data.


2.4 The Consequences of PM on the Component Health

In this section we calculate the consequences of PM on a mechanical system, which increase system reliability and efficiency.

We redefine the hazard rate function as a function of the reliability as [29]:

h(t) = − (1/R(t)) · dR(t)/dt    (2.6)

where h(t) and R(t) denote the hazard rate function and the reliability function, respectively. According to 'The dynamic reliability models for fatigue crack growth problem', see [30], we can rewrite the hazard function as:

h(t) = λ0 + A·(R0 − R(t))    (2.7)

where λ0 and A represent the initial failure rate and the degraded factor, respectively, and R0 is the initial reliability.

2.5 Dynamic Reliability

Definition 4. Dynamic reliability method provides a mathematical framework


capable of handling interactions among components and process variables
explicitly [31].

In other words, the dynamic reliability method represents a more realistic image of the system, its availability, risk and also safety.

In this section, we combine Eq. 2.6 and Eq. 2.7 to define a new equation for dynamic reliability in a mechanical system. To obtain this new equation, we use the Riccati differential equation.


2.5.1 Riccati differential equation

Definition 5. The Riccati3 equation is a nonlinear first-order differential equation which is not in the group of classical equations [32].

The Riccati equation appears in different areas of mathematics such as the theory of conformal mapping [33], algebra and geometry.

A general form of the Riccati equation can be written as

y′(x) = p(x)·y(x)² + q(x)·y(x) + r(x)    (2.8)

If r(x) = 0, the Riccati differential equation reduces to a Bernoulli differential equation.

If r(x) ≠ 0, and if we accept u(x) as a particular solution of the differential equation, then we set

y(x) = u(x) + 1/v(x)    (2.9)

Differentiating Eq. 2.9 with respect to x and substituting this into Eq. 2.8:

3 The equation is named after Jacopo Francesco Riccati (1676–1754)


u′(x) − v′(x)/v(x)² = p(x)·[u(x) + 1/v(x)]² + q(x)·[u(x) + 1/v(x)] + r(x)    (2.10)

then:

v′(x) + [q(x) + 2·p(x)·u(x)]·v(x) = −p(x)    (2.11)

as we see, Eq. 2.11 is a linear differential equation.

2.5.2 Dynamic Reliability Equation

By using the Riccati differential equation, we are able to create a general form for dynamic reliability based on R(t). To obtain this we find R(t) from Eq. 2.7:

R(t) = R0 + (λ0 − h(t)) / A    (2.12)


we substitute Eq. 2.12 into Eq. 2.6:

h(t)·(λ0 + A·R0 − h(t)) = dh(t)/dt    (2.13)

which is a Riccati differential equation based on h(t):

dh(t)/dt = (λ0 + A·R0)·h(t) − h(t)²    (2.14)

we define h1(t) as a particular solution for the problem:

h1(t) = λ0 + A·R0

then

(2.15)

to solve Eq. 2.15, we need to define an acceptable solution u which satisfies the linear 2nd-order ODE:

according to Eq. 2.12 we have:


we consider:

then:

(2.16)

where R0 is the initial reliability, λ0 is the initial failure rate, which can be calculated by Eq. 2.1, and A indicates the degraded factor, which can be found by fitting Eq. 2.16 to experimental data or simulation results. By using Eq. 2.16 the health promotion of a mechanical system can be calculated [34].
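
As a sketch, combining Eq. 2.6 and Eq. 2.7 with the particular solution h1 given above and the initial condition R(0) = R0 leads to a closed form of the following type (a reconstruction under those assumptions, not necessarily identical to the report's exact Eq. 2.16):

% Combining h(t) = -R'(t)/R(t) (Eq. 2.6) with h(t) = \lambda_0 + A(R_0 - R(t)) (Eq. 2.7)
% yields the Riccati equation of Eq. 2.14; solving it with R(0) = R_0 gives
R(t) \;=\; \frac{R_0\,(\lambda_0 + A R_0)}{A R_0 \;+\; \lambda_0\, e^{(\lambda_0 + A R_0)\,t}}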

Eq. 2.16 was introduced by Yuo-Tern Tsai and Kuo-Shong Wang in April 2004 (see [35], page 91) as a dynamic reliability equation that depicts the degraded behaviour of a component. The system reliability shows how close we get to the desired outcome for the given input with as little wastage as possible. As future work


we are going to use Eq. 2.16 to calculate the system reliability after every preventive or corrective maintenance action and also after every inspection.

By using this approach we are able to determine the consequences of different maintenance policies on the system health. This approach helps us to develop our model and also gives us better visibility of the health of a system.

2.6 Conclusions

In this chapter we introduced some new concepts, such as failure rate and time to first failure, to get a better understanding of the health of a component. We selected the cooling system as an example and calculated the failure rate, time to first failure, mean time to failure and mean time between failures for the idler roller, which is a subcomponent of the cooling system.

In section 2.4 we analyzed the consequences of preventive maintenance on the component health by using the hazard rate function and the reliability function. In subsection 2.5.1 we introduced the Riccati differential equation and used it in subsection 2.5.2 to find a new formula to calculate the health promotion of a mechanical system.

As future work for this part of the project we are going to simulate the system reliability for a multi-component system.

In the next chapter we are going to find the best maintenance activity for the cooling system.

CHAPTER 3
Multi Criteria Fuzzy Decision Making (MCFD)

3.1 Introduction

In this chapter, we provide a decision making model for a mechanical system based on maintenance activities and return on investment. We use the health of the system as an indicator to measure the system's performance.

The health of a system depends on the health of all the components that make up the system. Since there are various criteria that affect the health of a mechanical system, our decision making process becomes a multi-criteria decision making problem. We consider the health of a system to be between 0 and 1 in this project. By a health equal to 0 we mean that the system fails, and a health equal to 1 indicates a fully healthy system. With this consideration, we are able to formulate our decision making problem in a fuzzy environment.

There are different multi-criteria decision making techniques, such as AHP (the Analytical Hierarchy Process), TOPSIS (the Technique for Order of Preference by Similarity to Ideal Solution), SAW (Simple Additive Weighting), ELECTRE (ELimination Et Choix Traduisant la REalité), SMART (the Simple Multi-Attribute Rating Technique) and ANP (the Analytical Network Process). In this project we use the TOPSIS technique for a decision making model in a fuzzy environment.

To reach this goal we need to identify all criteria that affect the health of the components. Some of these criteria are completely quantifiable, some partially quantifiable, and other criteria are completely subjective. We also need to define different maintenance activities, such as preventive, corrective, inspection and imperfect maintenance.


The result of this chapter is the best maintenance activity to perform on each component; the policy of when to perform these activities is in fact the main output from the simulation and optimization outlined in the following chapters.

3.2 Fuzzy Set and Membership Function

A membership function indicates the degree of truth as an extension of evaluation.


This concept was introduced by Zadeh in 1965. Fuzzy truth represents
membership in vaguely defined sets. Some basic definitions of fuzzy sets,
membership functions and intuitionistic fuzzy sets are reviewed by Yun Shi [36], Kerre [37] and Yang [38].

Definition 6. A fuzzy set is a pair (A, m) where A is a set and m : A → [0,1]. For each x ∈ A, m(x) is called the grade of membership of x in (A, m). For a finite set A = {x1,...,xn}, the fuzzy set (A, m) is often denoted by {m(x1)/x1,...,m(xn)/xn}. Let x ∈ A. Then x is called fully included in the fuzzy set (A, m) if m(x) = 1 and is called not included if m(x) = 0. The set {x ∈ A | m(x) > 0} is called the support of (A, m) and the set {x ∈ A | m(x) = 1} is called the kernel. x is a fuzzy member if 0 < m(x) < 1, [38].
Definition 7. For any set X, a membership function on X is any function from X to the real unit interval [0,1]. The membership function which represents a fuzzy set A is denoted by µA. For an element x of X, the value µA(x) is called the membership degree of x in the fuzzy set A, [39].

According to [40] we are able to model unknown information by using an additional degree and intuitionistic fuzzy sets (IFS).


3.3 IFS Generalize Fuzzy Sets

Definition 8. An Intuitionistic Fuzzy Set A on a universe U is defined as an object of the following form:
A = {(u, µA(u), νA(u)) | u ∈ U}, where the functions µA : U → [0,1] and νA : U → [0,1] define the degree of membership and the degree of non-membership of the element u ∈ U in A, respectively, and for every u ∈ U we have 0 ≤ µA(u) + νA(u) ≤ 1, [41].

According to [40] a fuzzy set can be written as:

{(u, µA(u), 1 − µA(u)) | u ∈ U}    (3.1)

IFS thus generalize fuzzy sets: a fuzzy set corresponds to a membership function µ with the non-membership function ν fixed as ν = 1 − µ.

3.4 Fuzzy Implication Operators

The following table summarizes the classical binary implication:

a   b   a → b
0   0   1
0   1   1
1   0   0
1   1   1

Table 3.1: Binary implication

Definition 9. A mapping I : [0,1]² → [0,1] is a fuzzy implication if it satisfies the boundary conditions:

I(0,0) = I(0,1) = I(1,1) = 1 and I(1,0) = 0, [36].

A fuzzy implication can be generated by using three different approaches: R-implications, S-implications and QL-implications. In the present work we use R-implications.

3.5 Inclusion Degree Function of IFS

Assume U is a finite universe and R is an implication. IIFS is an inclusion degree function of IFS if R satisfies the following conditions [36]:

• ∀a, b ∈ [0,1]: a ≤ b ⇒ R(a,b) = 1

• R(a,b) is non-decreasing with respect to b and non-increasing with respect to a.

By using this definition we can write

(3.2)

where |U| is the cardinality of U, which can be calculated by [42]:

(3.3)

There are different methods to calculate an R-implication, introduced by several mathematicians. We use the Łukasiewicz implication:

RL(a,b) = min(1 − a + b, 1).    (3.4)
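
As a minimal illustration, the Łukasiewicz implication of Eq. 3.4 can be coded directly; the class name and test values below are illustrative only.

// Minimal sketch of the Lukasiewicz R-implication in Eq. 3.4.
public final class FuzzyImplication {
    /** R_L(a, b) = min(1 - a + b, 1), for a, b in [0, 1]. */
    static double lukasiewicz(double a, double b) {
        return Math.min(1.0 - a + b, 1.0);
    }

    public static void main(String[] args) {
        // Example values only; they illustrate the boundary behaviour of Definition 9.
        System.out.println(lukasiewicz(0.0, 0.0)); // 1.0
        System.out.println(lukasiewicz(1.0, 0.0)); // 0.0
        System.out.println(lukasiewicz(0.6, 0.5)); // 0.9
    }
}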


3.6 TOPSIS Method


The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
is an analysis method that is one of the best methods for multi criteria decision
making. TOPSIS was developed by Hwang and Yoon in 1981 and also by Yoon
in 1987.

The TOPSIS method is based on two main solutions: (1) the positive ideal solution, which has the best attribute values, and (2) the negative ideal solution, which has the worst attribute values.

TOPSIS measures the geometric distances between the alternatives and the positive and negative ideal solutions and selects the best alternative. The best alternative is the one which has the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution [44].

3.6.1 The Structure of TOPSIS Method

The TOPSIS method consists of six steps. We assume a decision making problem that has m alternatives and n criteria. The steps are performed as follows:

Step 1:
Given the m alternatives and n criteria, create the evaluation matrix with m rows and n columns. Obtain the value at the intersection of each alternative and criterion, define it as xij, and arrange the xij in the matrix (xij)m×n.

Step 2:
Create a set of weights for the criteria, define it as wn, and normalize (xij)m×n.

Step 3:
Identify the positive ideal solution, denote it A+; identify the negative ideal solution, denote it A−.

Step 4:

Measure the distance between each alternative and A+, define it as D+; measure the distance between each alternative and A−, define it as D−.

Step 5:
Determine the ranking index (pi) of each alternative; calculate pi by pi = D− / (D+ + D−).

Step 6:
Rank the alternatives according to the ranking index from step 5.
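
As a compact sketch of steps 1-6 for the classical (crisp) TOPSIS method, the Java example below uses a hypothetical decision matrix and weights; the fuzzy variant used later in this chapter replaces the distance measures with inclusion degrees.

// A compact sketch of the classical (crisp) TOPSIS steps 1-6 above.
// The 3x3 decision matrix and weights below are hypothetical illustration values.
public final class TopsisSketch {
    public static void main(String[] args) {
        double[][] x = {           // rows = alternatives M1..M3, columns = criteria C1..C3
            {7.0, 9.0, 9.0},
            {8.0, 7.0, 8.0},
            {9.0, 6.0, 8.0}
        };
        double[] w = {0.5, 0.3, 0.2};   // criteria weights (hypothetical)
        int m = x.length, n = w.length;

        // Steps 1-2: vector normalization and weighting
        double[][] v = new double[m][n];
        for (int j = 0; j < n; j++) {
            double norm = 0.0;
            for (int i = 0; i < m; i++) norm += x[i][j] * x[i][j];
            norm = Math.sqrt(norm);
            for (int i = 0; i < m; i++) v[i][j] = w[j] * x[i][j] / norm;
        }

        // Step 3: positive (A+) and negative (A-) ideal solutions (all criteria treated as benefits)
        double[] aPlus = new double[n], aMinus = new double[n];
        for (int j = 0; j < n; j++) {
            aPlus[j] = Double.NEGATIVE_INFINITY;
            aMinus[j] = Double.POSITIVE_INFINITY;
            for (int i = 0; i < m; i++) {
                aPlus[j] = Math.max(aPlus[j], v[i][j]);
                aMinus[j] = Math.min(aMinus[j], v[i][j]);
            }
        }

        // Steps 4-6: distances to the ideal solutions and ranking index p_i = D- / (D+ + D-)
        for (int i = 0; i < m; i++) {
            double dPlus = 0.0, dMinus = 0.0;
            for (int j = 0; j < n; j++) {
                dPlus += Math.pow(v[i][j] - aPlus[j], 2);
                dMinus += Math.pow(v[i][j] - aMinus[j], 2);
            }
            double p = Math.sqrt(dMinus) / (Math.sqrt(dPlus) + Math.sqrt(dMinus));
            System.out.printf("M%d: p = %.3f%n", i + 1, p);
        }
    }
}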

To illustrate the TOPSIS method, we assume a problem with 5 criteria and


3 alternatives:

Figure 3.1: TOPSIS illustration

As we see, every criterion affects every single alternative. With a set of alternatives and criteria, the decision maker can, for example, make three decisions as follows:

Criteria   Alternatives   Decision 1   Decision 2   Decision 3
C1         M1,M2,M3       M2,M1        M1           M3
C2         M1,M2,M3       M3           M1,M2        M2,M3
C3         M1,M2,M3       M1,M3        M3           M1
C4         M1,M2,M3       M3,M2        M2           M2
C5         M1,M2,M3       M2           M2           M3

Table 3.2: Decision Making by TOPSIS

Note: The best decision at a given time can be a single alternative or a combination of two or more alternatives.

3.6.2 TOPSIS Method in Multiple Criteria Fuzzy Decision Making

Since we consider our problem to be a multi-criteria decision problem in a fuzzy environment, we define A+f as a fuzzy positive ideal solution and A−f as a fuzzy negative ideal solution.

We now use the TOPSIS method to calculate the distances from A+f and A−f. Assume that we have a set of criteria C and a set of alternatives M:

C = {C1,C2,...,Cm}

M = {M1,M2,...,Mn}

According to [40] we assume that the alternatives and criteria are represented (using IFS) as:

M1 = {(C1, µ1,1, ν1,1), (C2, µ1,2, ν1,2), ..., (Cm, µ1,m, ν1,m)}

M2 = {(C1, µ2,1, ν2,1), (C2, µ2,2, ν2,2), ..., (Cm, µ2,m, ν2,m)}

...

Mn = {(C1, µn,1, νn,1), (C2, µn,2, νn,2), ..., (Cm, µn,m, νn,m)},

where µi,j indicates the degree by which the alternative Mi satisfies criterion Cj, and νi,j indicates the degree by which the alternative Mi does not satisfy criterion Cj.

Definition 10. A fuzzy positive ideal solution is defined as

A+f = {(C1, Max{µi,1}, min{νi,1}),
(C2, Max{µi,2}, min{νi,2}),
...
(Cm, Max{µi,m}, min{νi,m})}.

Definition 11. A fuzzy negative ideal solution is defined as

A−f = {(C1, min{µi,1}, Max{νi,1}),
(C2, min{µi,2}, Max{νi,2}),
...
(Cm, min{µi,m}, Max{νi,m})}.

To calculate the distance of each alternative from A+f and A−f, we define two inclusion degrees as follows:

Definition 12. The inclusion degree D+(Mi) of the positive ideal solution in alternative Mi and the inclusion degree D−(Mi) of the negative ideal solution in alternative Mi are respectively defined as

D+(Mi) = Max(I(A+f, Mi))    (3.5)

D−(Mi) = min(I(Mi, A−f)),    (3.6)

where I denotes the inclusion degree function, see Eq. 3.2.

Definition 13. The ranking index of alternative Mi is defined as

pi = D+(Mi) / (D+(Mi) + D−(Mi))    (3.7)

where 0 ≤ pi ≤ 1.

If there exists i0 ∈ {1,2,...,n} such that pi0 = Max{p1, p2, ..., pn}, then the alternative Mi0 is the best alternative, [40].

3.7 Problem Statement

To use MCFD for solving our problem we need to identify all criteria that affect the component's health. To perform this process, we analyzed various components in Scania's technical module system with the help of engineers at Scania R&D. In Table 3.3 we classify the 15 most important criteria with a high impact on a mechanical system.
Number   Description
C1       Calendar time
C2       Mileage
C3       Chassis load and strength
C4       Material operation
C5       Component health status
C6       Humidity
C7       Temperature
C8       Quality of roads
C9       Road dust
C10      Usage
C11      Fuel quality
C12      Driving styles
C13      Environment
C14      Speed
C15      Transport tasks

Table 3.3: The Criteria

We categorize the different maintenance policies in Table 3.4. By 'No Action' we mean that no maintenance activity should be run in some specific time intervals. For example, for a component at the end of its remaining useful life, 'No Action' is an optimal decision.
Number Description
M1 Corrective Maintenance
M2 Imperfect Maintenance
M3 Preventive Maintenance
M4 Inspection
M5 No Action
Table 3.4: The alternatives

3.8 A Numerical Example

To have a better understanding of MCFD, we analyse internal combustion engine


cooling (cooling system for short) as a real world example. In this section, we try
to find the best maintenance activity for the cooling system in a typical engine.


As we mentioned in the last section, we need to define the different alternatives and also identify all criteria which have a direct effect on the health of a component.

To define the maintenance activities, we need to study the company policies, the
customer’s perspective and requirements, which depend on the company’s task
operating systems.

Let us assume that M1, M2, M3 are three maintenance alternatives which indicate corrective maintenance, imperfect maintenance and preventive maintenance, respectively.

To identify the criteria with the highest influence we need knowledge of the
mechanical properties of the component. We use table 3.3 to choose related criteria
with the highest impact on the cooling system’s health.

Let C1,C2,C3 and C4 be the criteria that represent mileage, temperature, time and
humidity.

As decision makers, we want to find which of the alternatives Mi best satisfies the criteria C1 and C2, or just C3, according to the customer's perspective and the company's policies.

Suppose that the relationships between alternatives and criteria are:

M1 = {(C1,(0.5,0.6)), (C2,(0.5,0.1)), (C3,(0.2,0.4)), (C4,(0.1,0.5))}
M2 = {(C1,(0.5,0.6)), (C2,(0.5,0)), (C3,(0.3,0.6)), (C4,(0.5,0.2))}
M3 = {(C1,(0.6,0.2)), (C2,(0.4,0.3)), (C3,(0.2,0.3)), (C4,(0.4,0.1))}

To find the above values, we studied Scania's survival analysis (warranty data). In this study we compared the cooling system efficiency in the different regions.

To estimate the exact coefficients for these relationships we need to perform an accurate data mining study with a suitable tool such as RapidMiner.


Now we can construct the positive and negative ideal solutions:

A+1 = {(C1,(0.6,0.2)), (C2,(0.5,0))}

A+2 = {(C3,(0.3,0.3))}

A−1 = {(C1,(0.5,0.6)), (C2,(0.4,0.3))}

A−2 = {(C3,(0.2,0.6))}

where the first elements in A+1 and A+2 are maximum membership values and the second elements are minimum non-membership values. It means that a positive ideal solution is a set of elements that have the maximum values and a negative ideal solution is a set with the least values.

We then calculate the inclusion degree function by using Eq. 3.2, but before that we need to calculate RL by using Eq. 3.4 (Łukasiewicz implication). For alternative M1 and the criteria C1 and C2:

C1: RL(µA+1, µM1) = min(1 − 0.6 + 0.5, 1) = 0.9,  0.9 × 0.5 = 0.45
    RL(νM1, νA+1) = min(1 − 0.6 + 0.2, 1) = 0.6,  0.6 × (1 − 0.5) = 0.3

C2: RL(µA+1, µM1) = min(1 − 0.5 + 0.5, 1) = 1,    1 × 0.5 = 0.5
    RL(νM1, νA+1) = min(1 − 0.1 + 0, 1) = 0.9,    0.9 × (1 − 0.5) = 0.45

Note that λ is an optimal value between 0 and 1; we set λ = 0.5 in this example, and |U| is the cardinality of U, which is |U| = 2.


By using Eq. 3.2 we then have:

              M1      M2      M3
I(A+1, Mi)    0.85    0.825   0.9
I(Mi, A−1)    0.925   0.9     0.875

Table 3.5: The inclusion degrees of A+1 in Mi and the inclusion degrees of Mi in A−1

and also:

              M1      M2      M3
I(A+2, Mi)    0.45    0.425   0.475
I(Mi, A−2)    0.45    0.475   0.425

Table 3.6: The inclusion degrees of A+2 in Mi and the inclusion degrees of Mi in A−2

By using Eq. 3.5 and Eq. 3.6 we can calculate the inclusion degrees D+(Mi) and D−(Mi):

              M1      M2      M3
D+(Mi)        0.85    0.825   0.9
D−(Mi)        0.45    0.475   0.425

Table 3.7: The inclusion degrees D+(Mi) and D−(Mi)

The ranking index of alternatives (pi) can be calculated by using Eq. 3.7 as:

p1 = 0.85/(0.85 + 0.45) ≈ 0.654,  p2 = 0.825/(0.825 + 0.475) ≈ 0.635,  p3 = 0.9/(0.9 + 0.425) ≈ 0.679

As we see, p3 = 0.679 is the largest ranking index, so M3 is the best alternative, which indicates preventive maintenance in this case.

3.9 Conclusions

In this chapter we defined our problem in a fuzzy environment and used the TOPSIS method for the decision making. As we mentioned earlier, the main advantage of using fuzzy logic is flexibility.

We started this chapter with an introduction to fuzzy logic. We defined some concepts of the fuzzy environment in sections 3.2, 3.3, 3.4 and 3.5. We identified the TOPSIS method as a suitable decision making method for our problem and described the structure of this method in subsection 3.6.1.

By defining a positive and a negative ideal solution in subsection 3.6.2 we estimated the best and worst cases for our problem. In fact we sketched a fuzzy frame for the problem, and our goal is to stay far from the worst case (negative ideal solution) and as close as possible to the best case (positive ideal solution).

We identified fifteen criteria with a direct effect on the cooling system's health and defined some alternatives as maintenance activities in section 3.7.


As future work for the decision making part of this project we are going to examine some other multiple-criteria decision methods such as fuzzy value analysis (VA), the VIKOR method and the weighted product model.

Until now we have defined different maintenance activities, analyzed the system health and made a decision to choose the best alternative. In the next chapter we analyze different optimization methods to find a suitable algorithm for the optimization.

CHAPTER 4
Optimization Methods

4.1 Introduction

In decision making problems it is normal to have multiple design objectives, and there exist different optimization methods today for solving this kind of problem.

Most optimization algorithms and methods have been developed and improved from their previous versions. As discussed in [10], the solutions generated by most optimization algorithms cannot be used for problems that have a combination of discrete and continuous variables. Those algorithms mostly provide local optima as final solutions.

In this chapter, we will look at different ways to solve multi-objective design optimization problems and also find the best optimization algorithm to solve our problem, one that provides a global optimum.

4.2 Classical and Non-classical Optimization Methods

We classify the optimization problems in consideration of objective functions and


constraints into four groups:

1. Unconstrained Single - Objective Optimization (USOOP)

2. Constrained Single - Objective Optimization (SOOP)

3. Unconstrained Multi - Objective Optimization (UCMOP)

4. Constrained Multi - Objective Optimization (CMOP)


There are two approaches to solve these problems: classical optimization techniques and intelligent (non-classical) optimization techniques.

A classical optimization method is an analytical method that handles differentiable functions. This method is efficient when the underlying assumptions are fulfilled [11]. The classical optimization techniques do not support non-differentiable optimization problems. These methods cannot solve large-scale problems and are also sensitive to changes in the parameters, which is a potential disadvantage of using classical optimization methods. The trust region method and the interior point method are two well-known classical optimization methods.

The intelligent (non-classical) optimization methods have been specifically developed for those cases where the classical methods are not suitable, such as high-dimensional searches or problems with many local optima. Since the intelligent methods investigate many possible solutions, the number of evaluations can be very high and therefore these methods are applied in connection with computer experiments [13]. The intelligent optimization methods are also able to find the optimum solution of a CMOP. The penalty function method, resource allocation optimization methods, the multi-objective method and the co-evolutionary method are some well-known intelligent optimization methods.

4.3 Global Optimization

Global optimization, per definition, refers to finding the extreme value of a given non-convex function in a certain feasible region [13]. In most cases, classical optimization methods are not able to solve global optimization problems, because these methods usually get trapped in a local optimum. Moreover, classical methods can neither generate nor use the global information needed to find the global solution.


To get a better understanding of local and global optimization, we analyze Figure 4.1 below:

Figure 4.1: Point ’A’ shows a global optimization and the other points indicate a local
optimization [12]

Point B refers to a local optimum, because it is the highest place in its area. In other words, the peak of a mountain always indicates a local optimum [13]. As we see in Figure 4.1, there are several mountains; the highest mountain is always the global optimum, which is point A in this example.

Finding the global optimum point is the challenge in many cases. Metaheuristic algorithms are newer solvers which are designed to find the global optimum of


optimization problems. Metaheuristic algorithms are able to implement a stochastic optimization. Compared to other optimization algorithms, metaheuristic algorithms do not guarantee that a globally optimal solution can be found for all classes of optimization problems. The final solutions provided by metaheuristic algorithms depend on the set of random variables [14].

Metaheuristic algorithms can be classified by different approaches, for example as population-based searches, or categorized by the type of search strategy.

In this project, we review metaheuristics of the population-based search type to find a suitable optimization algorithm for solving our problem. Population-based metaheuristics include evolutionary algorithms, particle swarm optimization (PSO), the Imperialist Competitive Algorithm (ICA) and the Intelligent Water Drops algorithm (IWD) [15].

4.4 Evolutionary Algorithms

As we mentioned before, evolutionary algorithms (EA) are a subset of population-based metaheuristics that are inspired by biological evolutionary mechanisms in nature, such as reproduction, mutation, recombination and selection [16].

We summarize EA’s functions in two processes (algorithms) that work


simultaneously:

1. Evaluation

2. Optimization

To get a better understanding of EA’s functions, we analyse an example:


min f(x)
s.t. g(x) ≤ g0

EA tries to optimize the objective function and simultaneously tries to find a feasible set for the solutions. In fact, we are going to find the minimum value of f(x) and also find the value of α. We illustrate the process as follows:

In fact, to solve this problem we need to run two algorithms in parallel with different tasks. The first algorithm's task is to find the value of α, and the second algorithm tries to optimize the objective function f(x).

As we see in Figure 4.2, Algorithm 1 finds some values of α, which can be denoted α1, α2, ..., αm.

Algorithm 2 uses each α and produces a corresponding set of solutions x*1, x*2, ..., x*m.

Algorithm 1 recognizes the best α and transforms every α. After the transformation, Algorithm 2 produces a new set of x*.

Figure 4.2: An illustration of an evolutionary algorithm


Note: to recognize the best α in Algorithm 1, an optimization process must be performed every time.

In fact, Algorithm 1 is a 'meta-algorithm' (external) that consists of various members (α), and every member in Algorithm 1 calls Algorithm 2.

Genetic algorithm (GA), artificial bee colony algorithm, ant colony optimization
algorithms, evolution strategy (ES) and imperialist competitive algorithm are
some examples of evolutionary algorithms.

4.5 Conclusions

According to our problem’s conditions we need to find a robust and fast


optimization algorithm for problem solving. As we reviewed in this chapter the
classical optimization algorithms are not able to solve the large-scale optimization
problems such as our problem in this work.

We compared various optimization algorithms in both classical and intelligent


optimization methods. According to the problem’s conditions, limitation and our
goals the intelligent optimization methods are better suited for solving our
problem.

After an accrue pre-study we find Genetic algorithm as a suitable optimization


algorithm to solving our problem. In the next chapter we analyze Genetic
algorithm.

CHAPTER 5
Genetic Algorithm

5.1 Introduction

As we mentioned in the last chapter, the genetic algorithm is a type of evolutionary algorithm that was introduced by John Holland. Like other evolutionary algorithms, GA uses the biological processes of reproduction and natural selection to find the 'fittest' solutions [17].

The genetic algorithm solves both constrained and unconstrained multi-objective optimization problems and it is also able to solve problems with discontinuous, stochastic, highly nonlinear or ill-defined objective functions [18]. As we explained in the last chapter, classical optimization methods are not able to solve a wide range of redundancy allocation problems. A recent study published in 'Reliability Engineering and System Safety' shows that the genetic algorithm is an efficient meta-heuristic method for solving combinatorial optimization problems [19].

In this chapter, we introduce, illustrate and discuss the genetic algorithm as a suitable algorithm for solving our problem.

5.2 Structure

Before we begin describing the structure of GA, we need to define the solution encoding, which is called 'chromosomes' in the GA concept. In fact, the chromosomes are a set of parameters which propose a solution to the initial problem, and they are represented as a simple string [19]. The design of the chromosome and the parameters depends on the initial problem. We describe our design for both chromosomes and parameters in chapter 6.

We summarize the structure of genetic algorithm as:

1. fitness function

2. initial population of chromosomes

3. selection of parents for generation of new population

4. crossover to produce next generation of chromosomes

5. random mutation
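
A minimal sketch of this loop is shown below. This is not the code of Appendix B; the bit-string encoding, the toy fitness function and the parameter values are placeholder assumptions used only to illustrate steps 1-5.

import java.util.Arrays;
import java.util.Random;

// Minimal sketch of the GA loop in steps 1-5 above (not the Appendix B code).
// The bit-string encoding, fitness function and parameters are placeholder assumptions.
public final class SimpleGaSketch {
    static final Random RNG = new Random(42);
    static final int GENES = 24, POP = 40, GENERATIONS = 100;
    static final double MUTATION_RATE = 0.01;

    /** Step 1: toy fitness = number of 1-bits (a stand-in for e.g. simulated profit). */
    static int fitness(boolean[] c) {
        int f = 0;
        for (boolean g : c) if (g) f++;
        return f;
    }

    /** Step 3: tournament selection of a parent. */
    static boolean[] select(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    /** Steps 4-5: single-point crossover followed by random mutation. */
    static boolean[] offspring(boolean[] p1, boolean[] p2) {
        int cut = RNG.nextInt(GENES);
        boolean[] child = new boolean[GENES];
        for (int i = 0; i < GENES; i++) {
            child[i] = (i < cut ? p1[i] : p2[i]);
            if (RNG.nextDouble() < MUTATION_RATE) child[i] = !child[i];
        }
        return child;
    }

    public static void main(String[] args) {
        // Step 2: random initial population of chromosomes
        boolean[][] pop = new boolean[POP][GENES];
        for (boolean[] c : pop) for (int i = 0; i < GENES; i++) c[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            boolean[][] next = new boolean[POP][];
            for (int i = 0; i < POP; i++) next[i] = offspring(select(pop), select(pop));
            pop = next;
        }
        int best = Arrays.stream(pop).mapToInt(SimpleGaSketch::fitness).max().orElse(0);
        System.out.println("Best fitness after " + GENERATIONS + " generations: " + best);
    }
}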

Figure 5.1 represents the structure of GA with a numerical example:

Randomly chosen crossover point

010111010110 100010101010   1st parent genetic code

101101101010 011010101010   2nd parent genetic code

After crossover

010111010110 011010101010   1st offspring genetic code

101101101010 100010101010   2nd offspring genetic code

Figure 5.1: Genetic code of the parents and the offspring before and after the crossover

5.3 Applications and Advantages

The most important GA application is optimization. GA is a suitable method for solving both synthetic and numerical problems such as graph coloring, routing, partitioning and also the TSP. Machine learning is the second most important GA usage, which can be categorized as:

• Prediction and classification

• Prediction of weather

• Designing neural networks

Population genetics, ecology (modelling symbiosis), immune systems, automatic programming and filter design are other important applications of genetic algorithms [20].


By using GA, we avoid getting trapped in local optima and have a chance of finding a globally optimal solution for the problem, and the solution becomes better and better with time.

By using GA, the fitness function can be changed from iteration to iteration, which allows incorporating new data into the model if it becomes available [21].

GA supports multi-objective optimization, which is one of the most important benefits of GA. The modular genetic algorithm (MGA) is separated from the application; its building blocks can be used in hybrid applications4, which is another advantage of GA.

We use the genetic algorithm as the optimization algorithm in the next chapter.

4 A combination of elements of a web application and a native application.

5.4 Conclusions

The goal of this chapter was an introduction to the genetic algorithm. We found the genetic algorithm to be a robust, fast and suitable optimization algorithm for optimizing our problem. We analyzed the structure of the genetic algorithm and discussed some advantages of using the genetic algorithm in this chapter. In the next chapter we implement the genetic algorithm in our maintenance optimization model.

As future work for the optimization part of this project we are going to examine other optimization algorithms such as the ant colony optimization algorithm, the artificial bee colony algorithm and the natural evolution strategies algorithm.

CHAPTER 6
Maintenance Optimization Model

6.1 Introduction

As we mentioned before, maintenance activities are, at the highest level, divided into two major areas: corrective maintenance activities (CM) and preventive maintenance activities (PM). Corrective maintenance is, per definition, performed as a response to a system failure, while preventive maintenance is performed when the system is operational, in order to avoid future system failure. When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. The optimization of these activities is largely affected by their financial implications for a specific corporation, where two equivalent systems (mechanical or otherwise) under similar operations may require two quite different maintenance policies for two different corporations. A concise review and analysis of different maintenance optimization models can be found in [1]. In that article the authors describe several models for analytical optimization of PM policies and mention computer simulation as a good tool whenever simplifications of systems, made to render them analytically tractable, would lead to unrealistic results. In light of this, we have focused our efforts on a simulation approach to maintenance optimization, with the benefit of a capability to optimize more complex systems.

6.2 Simulation Framework Model

In this section we introduce a framework model for the simulation of a stochastic
system; the reader may think of it in terms of a mechanical system operating in
some corporate environment and subject to corrective and preventive
maintenance activities. Consider a discretization of time into time-steps Δt
and a description of a system in which all events fill up a whole number of such
time-steps. By making Δt sufficiently small, such a model can describe the system
with arbitrary precision. We consider a time horizon of T such discrete time units.
At each point in time the system is described in all important aspects by a state
vector S ∈ R^M; the current state includes the system time (a multiple of Δt) and any
variables describing the components of the system.

Furthermore, consider three types of events: random events, which happen
stochastically depending on the evolution of the system state; plannable events,
which may happen by choice depending on the current system state; and a default
event, which happens whenever neither a random nor a plannable event occurs. Let P
= {P_i} be the set of plannable events, R = {R_i} be the set of random events and D
be the default event. All events are considered as functions that do nothing other
than change the current state, P_i, R_i, D : R^M → R^M. Let r_i be functions r_i : R^M → [0,1]
corresponding to each random event R_i, such that r_i(S_t) is the probability that the
event R_i was triggered before time t. Let p_i be functions (called decidors) p_i : R^M →
{True, False} corresponding to each plannable event P_i, such that if p_i(S_t) = True
then the plannable event P_i is triggered at time t, and if p_i(S_t) = False the event is
not triggered at time t.

Figure 6.1: The structure of a Decidor (its DecisionData and DecisionMethods)

For the rationale of this simulation system we also assume that if several plannable
and random events compete to run at a specific time, the order in which they are
executed has limited effect on the results of the simulation. We also assume that
the plannable events, when executed, have no significant increasing effect on the
probability of the random events being triggered in the next time-step.

6.3 The Simulation Algorithm

Consider Figure 6.2 describing the major structure of our simulation algorithm.

Figure 6.2: Schematic view of our simulation algorithm.

We begin by setting the current state to the initial state, and in each iteration of the
algorithm we first execute all plannable events P_i for which p_i(S) = True. We then
randomize the level r̄_i at which each random event will trigger in the future,
execute the default event D, execute each random event R_i for which the current
level exceeds the trigger level (r_i ≥ r̄_i), and iterate this until our termination
time T is reached. A more detailed description can be found in the following
algorithm.

1: S ← S_initial

2: while t < T do

3:   for all i where p_i(S) = True do

4:     S ← P_i(S)

5:   end for

6:   for all i do

7:     r̄_i ← sample from uniform distribution on [r_i(S), 1]

8:   end for

9:   S ← D(S)

10:  for all i where r_i(S) ≥ r̄_i do

11:    S ← R_i(S)

12:  end for

13: end while

Note that the state S may change for subsequent i in the for-loops at lines 3 and
10, and our implementation iterates through i in increasing order; of course the
values taken by i depend on the number of plannable and random events,
respectively.

Normally, since we have randomly triggered events, we would need to run this
simulation a number of times, perhaps 10 to 1000 repetitions, and collect statistics
for the state variables in S as they develop over time. In our implementation we
gather the first two moments of all state variables for each point in time, that is,
their mean value and their mean square value.
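As a simplified sketch of this bookkeeping (with illustrative names; the System class in Appendix C does the same thing with its M and V arrays), the mean and mean square of a state variable can be accumulated per time step as follows.

public class MomentAccumulator {
    private final double[] sum;       // running sum of the variable per time step
    private final double[] sumSquare; // running sum of its square per time step
    private int runs;                 // number of completed simulation runs

    public MomentAccumulator(int horizon) {
        sum = new double[horizon];
        sumSquare = new double[horizon];
    }

    // Record the value of the state variable at time step t during one run.
    public void observe(int t, double value) {
        sum[t] += value;
        sumSquare[t] += value * value;
    }

    public void finishRun() {
        runs++;
    }

    public double mean(int t)       { return sum[t] / runs; }
    public double meanSquare(int t) { return sumSquare[t] / runs; }
}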

6.4 Example Problem Class

The framework model and simulation algorithm in the previous sections allow
for easy modeling of many real-world systems. In a specific model, one of the
most important decisions to make is the time discretization Δt. Making it too small
will increase the run-time of any simulation, while making it too large will introduce
greater error in the simulation output and in subsequent decisions based on that
output. Another key factor is choosing specific ways in which to encode the
decidors p_i. One of the simplest decidors, which we have used in this work and
specifically in our example problem class, is the linear decidor.

A linear decidor p_i in our framework model is a real vector p_i ∈ R^M such that the
outputted decision value, True or False, is equal to the truth value of the statement
p_i · S > 1. This allows the decision to perform a plannable event to depend to
different degrees on different elements of the state vector S. In many cases, when
modeling a maintenance system, all aspects of the system are constructed to
describe the real-world system in sufficient detail, while the decidors are the desired
output, to be chosen so as to give optimal performance for the maintenance policy.

Since there is only one type of decidor (one decision method) in our example
problem class, and the data for all decidors are real vectors of the same size, we may
describe the decision process in a more compact form. Let M be the matrix whose
i-th row is the decidor vector p_i (one row per plannable event), where N is the
number of plannable events in the system. Furthermore, consider the current
system state S to be a column vector. Then the elements of the vector

D = MS

will take a value d_i ≥ 1 for each plannable event P_i which is decided to be
performed.
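As a minimal sketch (with illustrative names, not taken from the implementation in the appendices), the compact decision step is an ordinary matrix-vector product followed by a threshold test:

public class LinearDecisionSketch {
    // Returns d = M s, where row i of m holds the decidor vector p_i.
    static double[] decisionValues(double[][] m, double[] s) {
        double[] d = new double[m.length];
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < s.length; j++) {
                d[i] += m[i][j] * s[j];
            }
        }
        return d;
    }

    // Plannable event P_i is decided to be performed when its decision value reaches 1.
    static boolean decided(double[] d, int i) {
        return d[i] >= 1;
    }
}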

Our simulation algorithm, however, will not in the current version execute all such
decided plannable events at once, but in effect iterate i from 1 to N, perform the
event if d_i ≥ 1, and, in anticipation of the next value of i, recompute D. A variation
of this method would be to calculate D, execute the event with the greatest d_i, then
recalculate D and again pick the event with the greatest d_i, until all d_i < 1, in which
case the simulation would continue to execute the default event as usual. This
method adds overhead, since the matrix multiplication may need to be recalculated
several times, but it might add extra decision intelligence by letting the decidors
have different degrees of preference for the execution of different plannable
events.
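This variation is not part of our current implementation; the following sketch only illustrates the idea, reusing the decisionValues helper from the sketch above and an illustrative EventAction callback that stands in for executing a plannable event on the state.

public class PreferenceDispatchSketch {
    public interface EventAction {
        void run(int eventIndex, double[] state); // executes plannable event P_i on the state
    }

    // Repeatedly recompute D = M S, execute the event with the greatest d_i >= 1,
    // and stop when all d_i < 1 so that the default event can run as usual.
    static void executeByPreference(double[][] m, double[] state, EventAction action) {
        while (true) {
            double[] d = LinearDecisionSketch.decisionValues(m, state);
            int best = -1;
            for (int i = 0; i < d.length; i++) {
                if (d[i] >= 1 && (best < 0 || d[i] > d[best])) {
                    best = i;
                }
            }
            if (best < 0) {
                break; // all d_i < 1: continue with the default event
            }
            action.run(best, state);
        }
    }
}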

In this section we provide a method for constructing members of a class of
problems, using linear decidors, such that these problems in many respects can
be considered to describe real-world maintenance activities on systems with one
or more components. Specifically, the single most important output value of
the simulations is the expected profit at the end of the time horizon T. We also
allow the components to have evolving efficiency measures that affect the profit
development over time.

Let N_c be the number of components in our problem system; this value is a free
parameter in the class and can, for example, be chosen randomly. Each such
component is granted a number of independent state variables s_ij, which form part
of the total state vector S.

For each component we allow for any number of randomly triggered events R_i.
Each such event has a probability distribution r_ij, which is a Weibull distribution
with a randomly chosen scale (λ) and shape (k). To compute the probability levels
as functions of the state, r(S), we introduce a linear dependency on the independent
state variables associated with the component (a_i · S). That is, r(S) = r(a_i · S).
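For reference, such a probability level can be evaluated directly from the standard closed form of the Weibull cumulative distribution function, F(x) = 1 - exp(-(x/lambda)^k). The sketch below uses illustrative names and is not the WeibullDistribution class used in Appendix D.

public class WeibullSketch {
    // Weibull CDF with scale lambda > 0 and shape k > 0, evaluated at x.
    static double cdf(double x, double lambda, double k) {
        if (x <= 0) {
            return 0.0;
        }
        return 1.0 - Math.exp(-Math.pow(x / lambda, k));
    }

    // Probability level r(S) = F(a . s): the component age is a linear combination
    // of the independent state variables, fed into the Weibull CDF.
    static double probabilityLevel(double[] a, double[] s, double lambda, double k) {
        double age = 0;
        for (int j = 0; j < a.length; j++) {
            age += a[j] * s[j];
        }
        return cdf(age, lambda, k);
    }
}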

For each component we also allow for a number of dependent state variables in a
similar fashion, e_ij. These state variables are computed from the current
independent state variables associated with the component by a function f(S) =
f(v_i · S), where f is a randomly chosen Weibull distribution and v_i is a suitable,
randomly chosen dependency vector. We interpret these dependent variables as a
measure of efficiency for some function of the component, which will soon be seen
to affect the simulated profit.

What remains in our example class is to define, or rather parameterize, the actual
event functions P_i, R_i, D.

We generate the plannable and the random events P_i, R_i in the same fashion: S ← MS
+ w, where M is a matrix of suitable dimensions and w a vector, both chosen
randomly such that the new values of the dependent state variables associated with
the corresponding component are free to change, but depend only on the old
values of these state variables. The only other state variables allowed to change
are the time t and the current profit: time must be advanced by an integer amount
and the profit must be increased by a constant term. The default event D is
somewhat different. It may add a constant term to any dependent state variable of any
component s, the current profit must increase by the addition of a term p and may also
decrease linearly by a factor c_e multiplied by any dependent state variable (efficiency
measure), and the time must increase by one time-step (Δt).
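A minimal sketch of applying such an affine event to the state variables of one component follows; the names and index arguments are illustrative, and the update of time and profit matches the description above.

public class AffineEventSketch {
    // Applies S <- M S + w to the block of state variables belonging to one component,
    // advances the time by an integer amount dt and adds the constant term p to the profit.
    static void apply(double[] state, int firstVar, double[][] m, double[] w,
                      int timeIndex, int dt, int profitIndex, double p) {
        int n = w.length;
        double[] oldVals = new double[n];
        for (int i = 0; i < n; i++) {
            oldVals[i] = state[firstVar + i];
        }
        for (int i = 0; i < n; i++) {
            double v = w[i]; // new value depends only on the old values of this component
            for (int j = 0; j < n; j++) {
                v += m[i][j] * oldVals[j];
            }
            state[firstVar + i] = v;
        }
        state[timeIndex] += dt;
        state[profitIndex] += p;
    }
}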

Figure 6.3: Illustration of the model (events P_i, R_i, D, the decidors, the simulation and the fitness evaluation)

6.5 A Numerical Example

We selected the following example system, randomized from our stated problem
class. The number of components is N_c = 2. The horizon time is T = 504. The first
component has two random events (failures) and one independent state variable.
The second component has only one random event and also one independent state
variable. Both components have two efficiency measures.

Table 6.1: Parameters defining random events (columns: scale λ, shape k, dependency vector a, event matrix M, vector w, time advance Δt).

Component  Event  M  w  Δt  p

1          1      0  0  1   -1

1          2      0  0  1   -1

2          1      0  0  1   -1

Table 6.2: Parameters defining plannable events.

Component  Efficiency  λ    k        v

1          1           504  0.55770  0.36798

1          2           504  1.2823   0.70517

2          1           504  0.85805  0.071498

2          2           504  0.51121  0.34842

Table 6.3: Parameters defining efficiency measures.

Table 6.4: Parameters defining the default event.

Figure 6.4: Results from optimization of linear decidors in the example problem.

In Figure 6.4 the expected values of the relevant state variables, estimated from 2000
iterations of a simulation of the example problem, are shown.

The linear decidors of the system (for the plannable events) were optimized using a
genetic algorithm with the expected profit at the time horizon T as the fitness
function. All 6 graphs are rescaled to lie between 0 and 1 to allow placement in
the same figure.

The two green curves show the two independent state variables (one for each
component). The red spikes represent a high probability of having a plannable
event at those points in time. Where there is an increased probability of the
plannable events occurring, there is also a visible drop in the corresponding state
variable, because M and w are both 0 for all plannable events, effectively resetting
the states (component ages) to zero.

The fitness can be seen to increase approximately linearly, which is a common
feature of maintenance planning simulations. As can be seen, the red spikes,
denoting a high probability of plannable events, decrease in height at later times.
This is likely a result of the increased number of random events that have
occurred, on average, by later times, causing a decreased determinability of the
system.

Figure 6.5 on the next page shows the convergence of the expected profit at the
end of the horizon T for the genetic algorithm used. We begin with an expected profit
of 377; after 272 generations of the optimization we have reached a fairly long
plateau beginning at a profit of 421, and we terminate the algorithm after 512
generations with a profit of 423.

Figure 6.5 shows convergence of expected profit at the end of horizon T for the
genetic algorithm used to optimize the linear decidors in the example problem.


Figure 6.5: Convergence of expected profit.

We summarize our algorithm as a flowchart on the next page. This overview allows
a better understanding of the process.

6.6 Conclusions

In this chapter we introduced our maintenance optimization model. We defined
three different types of events, namely random, plannable and default events, each with
different tasks. We also defined the decidor for each plannable event and described
the structure and function of a decidor. In section 6.3 we illustrated our simulation
algorithm in detail.

In section 6.4 we explained a specific model, introduced the linear decidor as
a type of decidor and formulated the decision process in a compact form. We
finished this chapter with a numerical example using the genetic algorithm and
analyzed the results.

The main goal of this chapter was to create a simulation approach to maintenance
optimization, with the benefit of a capability to optimize more complex systems.

CHAPTER 7
Summary, Conclusions and Future Work

7.1 Summary

As was discussed in the abstract, this work is split into two separate parts, one
dealing with decision making (chapter 3) and one dealing with the maintenance
optimization model (chapter 6). The aim has been to create an optimization model
for the maintenance problem. The component health was examined in chapter 2.
Several criteria were identified for the DSS in chapter 3 and a maintenance
decision was found. We analyzed different optimization methods and
algorithms in chapter 4, and the genetic algorithm was chosen as a suitable
optimization algorithm (see chapter 5). The maintenance plan, algorithm,
simulation and optimization are summarized in chapter 6, and in section 6.5
the results of the numerical example are discussed.

7.2 Conclusions

We have provided an introduction to maintenance planning optimization.

Simulation combined with suitable optimization algorithms provides freedom in describing
complex systems that are otherwise intractable by analytical methods. The method of
finding optimal decidors, defined by some parameterization of the decision logic,
allows for the prompt establishment of a fair approximation to the optimal use of
resources. Some additional layers are, however, useful and perhaps necessary to
establish clear decision support for human consideration. The decidors make
their decisions only as a function of the current state, and so provide only indirect
decision patterns for the planning of future activities.

One good way to transition from the decidors, and the statistics collected for their
decisions on a given system, starts from the realization that decisions closer in time are
more readily understandable and predictable. As a system evolves, its inherent
stochastic nature makes, for example, the precise timing of maintenance activities
more and more difficult. In light of this we may consider optimizing a plan for the
close future in which activities are decided and fixed, and will be performed unless
unexpected events occur, in which case a new good close-future plan must be
found. For a one-component system such a short-term plan may be based on 'time
to first intervention', where we, for all possible times to the first plannable event,
consider enforcing the plan and, in the simulations, fall back on the decidors to
evaluate a near-optimal 'just in time' decision policy for the future if unexpected
behavior occurs. This provides the decision maker with a convenient single-value (profit)
expected result for planning the first intervention at different times, and the decision
maker may apply additional non-modeled preferences to the decision based on these
additional simulations.

7.3 Future Work

There are three main lines of this research which should be pursued. The first is the
work on the criteria that affect the component health. It is our intention to
investigate the various criteria thoroughly, with a view to measuring how different
criteria affect the different components. To achieve this goal we need accurate
data mining with tools such as RapidMiner. A second line, which
follows from chapter 2, is to investigate and calculate the system reliability with the
equations that we introduced and proved in chapter 2. As a last line, we are going
to examine the results with different optimization algorithms, such as the artificial bee
colony algorithm, memetic algorithms, etc.

CHAPTER 8
Summary of reflection of objectives in the thesis

In this chapter we summarize our view of the objectives achieved in this thesis.

8.1 Objective 1 - Knowledge and Understanding

In writing this thesis I had to research many journal articles in mechanical engineering,
mathematical modeling, simulation and optimization.

In the process of writing chapter 4, I used a broad range of optimization books as
source material. Originally I compared many optimization methods and
algorithms in detail; the material grew to many pages, and on the recommendation
of my examiner and supervisor I decided to present the main results of the
chapter in a more concise format.

At the research level, I recognized that the initial problem is a multi-objective
optimization problem, and therefore I transferred the problem to a fuzzy
environment. I found 15 criteria which affect the component health. I also analyzed
the structure of the component and studied a wide range of articles in different
areas of multi-criteria decision making, and found TOPSIS to be a suitable method
for the DSS.

In section 2.5.1, I found the Riccati differential equation to be a suitable equation for
expressing the system reliability.


8.2 Objective 2 - Methodological Knowledge

In writing this thesis I used two different approaches for decision making. The first
approach is a combination of fuzzy decision making and the TOPSIS method. As
a second approach, I used a genetic algorithm for the optimization.

To give the reader a better understanding of the background to each chapter, I
began every chapter with a short introduction and definitions.

8.3 Objective 3 - Critically and Systematically Integrate Knowledge

The sources that I used in the thesis were obtained through searches in the Mälardalen
University computer system, as well as journal articles which I found in the
university library and through Google searches.

I divided the thesis into three different phases: pre-study, execution and
implementation. In the pre-study phase I read a broad range of articles about
maintenance policies and optimization. I identified the problem and the indicators in
this phase.

The execution phase consisted of designing, building and testing the initial algorithm in
Matlab. The implementation phase was a collaboration between me and Jonas
Österberg, in which the initial algorithm was implemented in JavaScript.

8.4 Objective 4 - Independently and Creatively Identify and Carry out Advanced Tasks

In section 2.5.1 I used the Riccati differential equation to find Eq. 2.16, which
is a combination of two equations introduced by Yuo-Tern Tsai and Kuo-Shong.

The numerical example in section 3.8 is defined by the author. I identified 15
criteria which affect the health of the component. To calculate the relationships between
the criteria and the alternatives, I used my knowledge of solid mechanics and strength of
materials, and I also discussed the results with senior mechanical engineers
in the maintenance department at Scania.


8.5 Objective 5 - Present and Discuss Conclusions and Knowledge

I believe that this thesis is understandable for readers who have a moderate
familiarity with English and moderate knowledge of optimization topics. The
reader needs to have a good knowledge of DSS to understand the different methods
that we used in this project for decision making (such as TOPSIS and fuzzy
decision making).

I assigned chapter 1 to definitions and terms, and also tried to define all concepts
which are required to understand the topic. In this project, I also included some
numerical examples to give a better understanding.

I explained the future work in the conclusions section, which is to explore the use
of data mining with RapidMiner to get a more accurate model for the health of
components.

8.6 Objective 7 - Scientific, Social and Ethical Aspects

This thesis was a part of the IRIS project (in Swedish: Integrerat dynamiskt
prognostiserande underhållsstöd), which is a five-year project at Scania's
Research and Development department in Sweden. The approval process for this
project at Scania involves review and approval at each phase of project
development.

To follow up on the results, we had scheduled meetings with the IRIS project team
every week, and also with my supervisor Dr. Jonas Biteus, who is the project leader
for the IRIS project.

I also presented this project at the ICNPAA 2014 Congress [47] in Norway
(Mathematical Problems in Engineering, Aerospace and Sciences). The, at this point
unpublished, congress article has the title 'Solving complex maintenance
planning optimization problems using stochastic simulation and multicriteria
fuzzy decision making' and was written under the supervision of Professor Sergei
Silvestrov, Jonas Österberg and Dr. Jonas Biteus.

APPENDIX A
Java code - Event Triggers

In this appendix we list the three types of event triggers used in our simulations.

Trigger: The superclass describing a decision to perform an event at a specific


time. The prepare method is called at the beginning of every event dispatch
iteration and allows the implementors of this class to adjust their behavior
depending on a change in system state. The decide method is called whenever this
trigger is asked if its payload event should be executed. The mutate and mate
methods are used by the genetic algorithm for constructing new triggers from one
or two ancestors, leaving the specifics of the mutations up to the implementing
classes.

AlwaysTrigger: Represents the decision to perform the event whenever it is asked


to decide. This is suitable for “default” events, representing continuous normal
system operation.

StochasticTrigger: This trigger offers stochastic behavior due to a change in the
value method from the prepare phase to the decide phase.

LinearTrigger: This trigger returns a True decision if the linear combination of the
state variables exceeds 1. The coefficients for the linear combination are stored in
the val array, and these values change on a mutate or mate method call.


1 package com.vuebe.vhen;
2
3 import java.util.Random;
4
5 public abstract class Trigger {
6 public static Random random = new Random();
7 public Event event;
8
9 public abstract void prepare(State state);
10
11 public abstract boolean decide(State state);
12
13 public Trigger mutate(double p) {
14 return this;
15 }
16
17 public Trigger mate(Trigger b, double interpol) {
18 return this;
19 }
20
21 public Trigger random() {
22 return this;
23 }
24 }
25
26
27
28 package com.vuebe.vhen;


29
30 public class AlwaysTrigger extends Trigger {
31
32 public AlwaysTrigger(Event event) {
33 this.event = event;
34 }
35
36 @Override
37 public void prepare(State state) {
38 }
39
40 @Override
41 public boolean decide(State state) {
42 return true;
43 }
44
45 }
46
47
48 package com.vuebe.vhen;
49
50 public abstract class StochasticTrigger extends Trigger {
51 private double value;
52 private double triggervalue;
53
54 public StochasticTrigger(Event event) {
55 this.event = event;
56 }


57
58 public abstract double value(State state);
59
60 @Override
61 public void prepare(State state) {
62 value = value(state);
63 if (value < 0)
64 value = 0;
65 if (value > 1)
66 value = 1;
67 triggervalue = value + random.nextDouble() * (1 - value);
68 }
69
70 @Override
71 public boolean decide(State state) {
72 value = value(state);
73 if (value < 0)
74 value = 0;
75 if (value > 1)
76 value = 1;
77 return value >= triggervalue;
78 }
79 }
80
81
82
83 package com.vuebe.vhen;
84


85 import java.util.Random;
86
87 import com.vuebe.util.Util;
88
89 public class LinearTrigger extends Trigger {
90 public double[] val;
91 public double[] mask;
92
93 public LinearTrigger(Event event, double[] val, double[] mask) {
94 this.event = event;
95 this.val = val;
96 this.mask = mask;
97 }
98
99 public LinearTrigger(Event event, double[] val) {
100 this.event = event;
101 this.val = val;
102 this.mask = new double[val.length];
103 setMask();
104 }
105
106 public LinearTrigger(Event event, State S) {
107 this.event = event;
108 this.val = new double[S.val.length];
109 this.mask = new double[val.length];
110 setMask();
111 }
112


113 public void setMask() {


114 for (int i = 0; i < mask.length; i++)
115 mask[i] = 1;
116 }
117
118 public void setMask(int i) {
119 mask[i] = 1;
120 }
121
122 public void clearMask() {
123 for (int i = 0; i < mask.length; i++)
124 mask[i] = 0;
125 }
126
127 public void clearMask(int i) {
128 mask[i] = 0;
129 }
130
131 @Override
132 public LinearTrigger mutate(double p) {
133 double[] mal = new double[val.length];
134 for (int i = 0; i < mal.length; i++) {
135 if (random.nextDouble() < p) {
136 double v = val[i];
137 mal[i] = v + (random.nextDouble() * 2 - 1) * mask[i];
138 if (mal[i] < 0)
139 mal[i] = 0;
140 } else {


141 mal[i] = val[i];


142 }
143 }
144 return new LinearTrigger(event, mal, mask);
145 }
146
147 @Override
148 public Trigger mate(Trigger b, double interpol) {
149 if (b instanceof LinearTrigger) {
150 LinearTrigger B = (LinearTrigger) b;
151 double[] va = val;
152 double[] vb = B.val;
153 double[] vm = new double[va.length];
154 for (int i = 0; i < vm.length; i++) {
155 double v1 = va[i];
156 double v2 = vb[i];
157 vm[i] = v1 + (v2 - v1) * interpol;
158 }
159 return new LinearTrigger(event, vm, mask);
160
161 } else
162 throw new RuntimeException("Invalid Trigger type combination.");
163 }
164
165 public Trigger random() {
166 double[] vm = new double[val.length];
167 for (int i = 0; i < vm.length; i++) {
168 vm[i] = mask[i] * random.nextDouble() * 0.001;


169 }
170 return new LinearTrigger(event, vm, mask);
171 }
172
173 @Override
174 public void prepare(State state) {
175 }
176
177 @Override
178 public boolean decide(State state) {
179 double sum = 0;
180 for (int i = 0; i < val.length; i++) {
181 sum += val[i] * state.val[i];
182 }
183 return sum >= 1;
184 }
185
186 public String toString() {
187 return "val " + Util.toString(val) + " mask " + Util.
toString(mask);
188 }
189 }

APPENDIX B
Java code - Genetic Algorithm

In this appendix we list the Java code for the genetic algorithm used in our
simulations.

Individual: The genetic algorithm implementation will consider a population of


individuals, but the specifics of these individuals are subject to customization by
implementing this class. Specifically, the evaluate method will compute the
fitness of this individual and the getValue method will return the last evaluated
fitness. The fitness function may be stochastic and so the evaluate method may
have to be called more than once.

GA: This is the main genetic algorithm implementation, providing the possibility to
optimize the fitness function for an individual and its mutated offspring. The
initiatePopulation method provides the algorithm with a seed individual and
constructs a population from it. This population is then evolved, ideally to a better
fitness, by the evolve method. The evolve method may be run any number of
times, and the currently best individual can then be retrieved by the
getBestIndividual method.

1 package com.vuebe.ga;
2
3 public interface Individual extends Comparable<Individual> {
4
5 /** Compute the value of this individual. */
6 public double evaluate();
7
8 /** Return the computed value of this individual. */
9 public double getValue();


10
11 public Individual mutate(double p);
12
13 public Individual[] mate(Individual mate);
14
15 public Individual random();
16 }
17
18
19
20 package com.vuebe.ga;
21
22 import java.util.Collections;

23 import com.vuebe.vhen.*;

24 import java.util.Random;

25 import java.util.Vector;

26
27 import com.vuebe.util.Util;
28
29 public class GA {
30 public static final int THREADCOUNT = 8;
31
32 public static final int TOURNAMENT = 0;
33 public static final int ROULETTE_WHEEL = 1;
34
35 private int Ne = 2; // Elite individuals
36 private int Ns = 50; // Selection individuals
37 private int N = Ne + Ns; // Population size N=Ne+Ns


38 private double Pmut = 0.2; // Mutate probability


39 private int selectionmethod = TOURNAMENT; // Selection method
40
41 private int Nt = 6; // Tournament size
42 private double Pt = 0.5; // Tournament probability
43
44 private Random random = new Random();
45
46 private Vector<Individual> population;
47 private int generation;
48 private int evaluations;
49
50 public Individual getBestIndividual() {
51 Individual ind = population.get(0);
52 double best = ind.getValue();
53 for (int i = 1; i < population.size(); i++) {
54 Individual ind2 = population.get(i);

55 double value = ind2.getValue();

56 if (value > best) {

57 best = value;

58 ind = ind2;

59 }

60 }

61 return ind;


62 }
63
64 public void setSelectionSize(int ns) {

65 if (ns < 0 || (ns & 1) != 0)

66 throw new RuntimeException();

67 this.Ns = ns;

68 this.N = Ns + Ne;

69 }
70
71 public void setEliteSize(int ne) {

72 if (ne < 0)

73 throw new RuntimeException();

74 this.Ne = ne;

75 this.N = Ns + Ne;

76 }
77
78 public int getGeneration() {

79 return generation;

80 }
81
82 public int getEvaluations() {

83 return evaluations;


84 }
85
86 public void initiatePopulation(Individual seed) {
87 population = new Vector<Individual>();
88 evaluations = 0;
89 for (int i = 0; i < N; i++) {
90 Individual individual = seed.random();
91 individual.evaluate();
92 evaluations++;
93 population.add(individual);
94 }
95 }
96
97 public void sort() {
98 Collections.sort(population);
99 }
100
101 public void evolve() {
102 // Util.dump("Evolve");
103 sort();
104
105 String s = "";
106 for (int i = 0; i < population.size(); i++) {
107 s += population.get(i).getValue() + " ";
108 }
109 // Util.dump("Values: " + s);
110
111 Vector<Individual> nextpopulation = new Vector<Individual >();


112
113 // Selection
114 if (selectionmethod == TOURNAMENT) {
115 for (int tourn = 0; tourn < Ns / 2; tourn++) {
116 Vector<Individual> tour = new Vector<Individual>();
117 for (int i = 0; i < Nt; i++) {
118 tour.add(population.get(random.nextInt(population. size())));
119 }
120 Collections.sort(tour);
121
122 Individual first = null;
123 Individual second = null;
124 int firstindex = -1;
125 combine: while (true) {
126 double P = Pt;
127 for (int i = Nt - 1; i >= 0; i--) {
128 if (i != firstindex && random.nextDouble() < P) {
129 if (first == null) {
130 first = tour.get(i);
131 firstindex = i;
132 } else {
133 second = tour.get(i);
134 Individual[] children = first.mate(second);
135
136 children[0] = children[0].mutate(Pmut);
137 children[1] = children[1].mutate(Pmut);
138
139 nextpopulation.add(children[0]);


140 nextpopulation.add(children[1]);
141
142 children[0].evaluate();
143 children[1].evaluate();
144
145 evaluations += 2;
146
147 break combine;

148 }

149 }

150 P = P * (1 - Pt);
151
152 }

153 }
154
155 }

156 } else

157 throw new RuntimeException("Unimplemented selection method.");
158 // Elite
159 for (int i = 0; i < Ne; i++) {
160 Individual ind = population.get(population.size() - 1 - i);
161 ind = ((com.vuebe.vhen.System) ind).clone();
162 ind.evaluate();
163 nextpopulation.add(ind);


164 }
165
166 generation++;
167 population = nextpopulation;
168 }
169 }

APPENDIX C
Java code - Simulation Classes

In this appendix we list the Java code for simulation classes used in our simulations.

State: Represents a system state consisting of a set of double values (real
numbers) in the val field. The state variables are named by the corresponding
array entries in the name field. The dep array provides the possibility of having some
state variables computed dynamically as functions of the other state variables.

Event: The superclass of all events that may occur in the simulation; an event's only
result is to change the values of the state variables in its run method.

StateFunction: Used in the State class to provide the possibility for dynamically
computed state variables.

System: This is the main simulation class, realizing the simulation algorithm. The
system is defined by the initialstate field and the set of triggers (and the events they
trigger) in the triggers field. The toMatlab method creates a string in Matlab
syntax constructing the experimental data collected from a number of
simulations performed on this system by the simulate method. The class also
provides mutation and mating methods, since the class can be used as an individual
in the genetic optimization algorithm. The main simulation algorithm is contained
in the simulateOne method, which performs a simulation of the system over the entire
time horizon. This method is written a bit more generally than the algorithm
described previously in the report, for experimentation purposes and for ease of
adaptation to more complex scenarios. Given a correctly ordered list of event
triggers, the algorithm works equivalently except for a small difference in the
handling of the time step.

1 package com.vuebe.vhen;
2


3 import java.util.Hashtable;
4
5 public class State {
6 public Hashtable<String, Integer> ind = new Hashtable<
String, Integer>();
7 public String[] name;
8 public double[] val;
9 public StateFunction[] dep;
10
11 public State(State s) {
12 ind = (Hashtable<String, Integer>) s.ind.clone();
13 val = s.val.clone();
14 dep = s.dep.clone();
15 name = s.name;
16 }
17
18 public State(String... vars) {
19 for (int i = 0; i < vars.length; i++) {
20 if (ind.put(vars[i], i) != null) {
21 throw new RuntimeException("Duplicate variable");
22 }

23 }

24 val = new double[vars.length];

25 this.name = vars.clone();

26 }
27


28 public State(String[] vars, StateFunction[] dep) {


29 this(vars);
30 this.dep = dep.clone();
31 if (dep.length != vars.length)
32 throw new RuntimeException("Invalid dependency function count.");
33 }
34
35 public String toString() {
36 String s = "";
37 for (int i = 0; i < name.length; i++)
38 s += name[i] + "=" + val[i] + " ";
39 return s;
40 }
41
42 public void computeDependent() {
43 if (dep == null)
44 return;
45 for (int i = 0; i < dep.length; i++) {
46 if (dep[i] != null)
47 val[i] = dep[i].value(this);
48 }
49 }
50
51 public int index(String var) {
52 return ind.get(var);
53 }
54
55 }


56
57
58
59 package com.vuebe.vhen;
60
61 public abstract class Event {
62 public abstract void run(State state);
63 }
64
65
66
67 package com.vuebe.vhen;
68
69 public abstract class ValueFunction {
70 public abstract double value(System system);
71 }
72
73
74
75 package com.vuebe.vhen;
76
77 public abstract class StateFunction {
78 public abstract double value(State state);
79 }
80
81
82
83 package com.vuebe.vhen;


84
85 import java.util.Random;
86 import java.util.Vector;
87
88 import com.vuebe.ga.Individual;
89 import com.vuebe.util.Util;
90
91 public class System implements Individual {
92 public static Random random = new Random();
93
94 private State initialstate;
95 private int horizon;
96 private ValueFunction valuefunction;
97 public Vector<Trigger> triggers = new Vector<Trigger>();
98 public Vector<String> triggername = new Vector<String>();
99
100 // derived or simulated
101 int itime;
102
103 public int N;

104 public double[][] M; // State Mean


105 public double[][] V; // State Variance
106 public double[][] C; // Trigger counts
107
108 public double value;

109 private int simulationcount;


110


111 public String toMatlab() {


112 StringBuilder sb = new StringBuilder();
113
114 sb.append("iterations=" + N);
115 sb.append(";\n");
116
117 sb.append("statevars=" + Util.toMatlab(initialstate.name) );
118 sb.append(";\n");
119
120 for (int i = 0; i < M.length; i++) {
121
122 sb.append("M" + i + "="
123 + Util.toMatlab(Util.multiplyNew(M[i], 1d / N)));
124 sb.append(";\n");
125
126 sb.append("V" + i + "="
127 + Util.toMatlab(Util.multiplyNew(V[i], 1d / N)));
128 sb.append(";\n");
129 }
130
131 sb.append("M={");
132 for (int i = 0; i < M.length; i++) {
133 sb.append("M" + i);
134 if (i != M.length - 1)
135 sb.append(",");
136 }
137 sb.append("}\n");
138


139 sb.append("V={");
140 for (int i = 0; i < M.length; i++) {
141 sb.append("V" + i);
142 if (i != M.length - 1)
143 sb.append(",");
144 }
145 sb.append("}\n");
146
147 for (int i = 0; i < C.length; i++) {
148 sb.append("C" + i + "="
149 + Util.toMatlab(Util.multiplyNew(C[i], 1d / N)));
150 sb.append(";\n");
151 }
152
153 sb.append("C={");
154 for (int i = 0; i < C.length; i++) {
155 sb.append("C" + i);
156 if (i != C.length - 1)
157 sb.append(",");
158 }
159 sb.append("}\n");
160
161 sb.append("triggername={");
162 for (int i = 0; i < C.length; i++) {
163 sb.append("’" + triggername.get(i) + "’");
164 if (i != C.length - 1)
165 sb.append(",");
166 }


167 sb.append("}\n");
168
169 return sb.toString();
170 }
171
172 public State getInitialState() {
173 return initialstate;
174 }
175
176 public Trigger getTrigger(int index) {
177 return triggers.get(index);
178 }
179
180 public void setInitialState(State S) {
181 initialstate = S;
182 itime = initialstate.index("time");
183 }
184
185 public void setValueFunction(ValueFunction vf) {
186 this.valuefunction = vf;
187 }
188
189 public void setHorizon(int horizon) {
190 this.horizon = horizon;
191 }
192
193 public void setSimulationCount(int simulationcount) {
194 this.simulationcount = simulationcount;


195 }
196
197 public int getHorizon() {
198 return horizon;
199 }
200
201 public int addTrigger(Trigger trigger, String name) {
202 triggers.add(trigger);
203 triggername.add(name);
204 return triggers.size() - 1;
205 }
206
207 public void dump() {
208 Util.dump("Initial: " + Util.toString(initialstate.val));
209 Util.dump("Horizon: " + horizon);
210 for (int i = 0; i < triggers.size(); i++) {
211 Util.dump("Trigger " + i + ": " + triggers.get(i).
toString());
212 }
213 }
214
215 private void simulateOne() {
216 State S = new State(initialstate);
217 S.computeDependent();
218 Random rand = new Random();
219 S.val[itime] = 0;
220 while (S.val[itime] < horizon) {
221 // prepare triggers


222 for (int i = 0; i < triggers.size(); i++)


223 triggers.get(i).prepare(S);
224 // decide triggers
225 for (int i = 0; i < triggers.size(); i++) {
226 if (triggers.get(i).decide(S)) {
227 int t0 = (int) Math.round(S.val[itime]);
228 if (t0 < horizon)
229 C[i][t0]++;
230
231 triggers.get(i).event.run(S);
232 S.computeDependent();
233
234 int t1 = (int) Math.round(S.val[itime]);
235 for (int t = t0; t < t1 && t < horizon; t++) {
236 for (int p = 0; p < S.val.length; p++) {
237 double v = S.val[p];
238 M[p][t] += v;
239 V[p][t] += v * v;
240 }
241 }
242 }
243 }
244 }
245 }
246
247 public void simulate() {
248 // Util.dump("Sim");
249 // dump();


250 M = new double[initialstate.val.length][];


251 V = new double[initialstate.val.length][];
252 C = new double[triggers.size()][];
253 for (int i = 0; i < M.length; i++) {
254 M[i] = new double[horizon];
255 }
256 for (int i = 0; i < V.length; i++) {
257 V[i] = new double[horizon];
258 }
259 for (int i = 0; i < C.length; i++) {
260 C[i] = new double[horizon];
261 }
262
263 for (N = 0; N < simulationcount; N++) {
264 simulateOne();
265 }
266
267 if (valuefunction != null) {
268 value = valuefunction.value(this);
269 }
270 }
271
272 private System cloneNoTriggers() {
273 System s = new System();
274 s.setInitialState(initialstate);
275 s.setHorizon(horizon);
276 s.setValueFunction(valuefunction);
277 s.itime = itime;


278 s.simulationcount = simulationcount;


279 s.triggername = triggername;
280 return s;
281 }
282
283 public System cloneMutate(double p) {
284 System s = cloneNoTriggers();
285 for (int i = 0; i < triggers.size(); i++) {
286 s.triggers.add(triggers.get(i).mutate(p));
287 }
288 return s;
289 }
290
291 public System clone() {
292 System s = cloneNoTriggers();
293 for (int i = 0; i < triggers.size(); i++) {
294 s.triggers.add(triggers.get(i));
295 }
296 return s;
297 }
298
299 public System[] cloneMate(System mate) {
300 double interpol = random.nextDouble() * 1.4 - 0.2;
301 System s1 = cloneNoTriggers();
302 System s2 = cloneNoTriggers();
303 for (int i = 0; i < triggers.size(); i++) {
304 s1.triggers.add(triggers.get(i)
305 .mate(mate.triggers.get(i), interpol));


306 s2.triggers.add(triggers.get(i).mate(mate.triggers.get(
i),
307 0.5 - interpol));
308 }
309 return new System[] { s1, s2 };
310 }
311
312 public System cloneRandom() {
313 System s = cloneNoTriggers();
314 for (int i = 0; i < triggers.size(); i++) {
315 s.triggers.add(triggers.get(i).random());
316 }
317 return s;
318 }
319
320 @Override
321 public int compareTo(Individual ind) {
322 return (int) Math.signum(getValue() - ind.getValue());
323 }
324
325 @Override
326 public double evaluate() {
327 simulate();
328 return value;
329 }
330
331 @Override
332 public double getValue() {


333 return value;


334 }
335
336 @Override
337 public Individual mutate(double p) {
338 return cloneMutate(p);
339 }
340
341 @Override
342 public Individual[] mate(Individual mate) {
343 return cloneMate((System) mate);
344 }
345
346 @Override
347 public Individual random() {
348 return cloneRandom();
349 }
350 }

APPENDIX D
Java code - Example Problem Class

In this appendix we list the Java code for the construction and analysis of instances
of the example problem class from Section 6.4.

SystemFactory: The class that constructs systems dynamically and randomly.
Several parameters for the constructed system can be specified, such as the
number of “components” NC and the number of state variables per component, sv0 to
sv1. After specifying the desired parameters, the system can be created by a call
to the createSystem method.

TestFactory: This class is the main program entry point through the main method
and, subsequently, the run method. A SystemFactory object is first constructed,
some parameters are specified for the desired system, and the system is then created
by the createSystem method call. This created system is then fed as the seed
Individual to a genetic algorithm by the ga.initiatePopulation(sys); statement. The
genetic algorithm is then run 10000 times and the best individual after the
optimization is saved to a file in Matlab format for analysis and plotting in Matlab.

1 package com.vuebe.vhen;
2
3 import java.util.Random;
4 import java.util.Vector;
5
6 import com.vuebe.math.WeibullDistribution;
7 import com.vuebe.util.Util;
8
9 public class SystemFactory {
10 public Random random = new Random(0);


11
12 // T=T1*T2*T3, hours days weeks
13 public int T;
14 public int T1 = 24;
15 public int T2 = 7;
16 public int T3 = 3;
17
18 // Number of components
19 public int NC;
20
21 // number of state variables per component
22 public int sv0;

23 public int sv1;


24
25 // Failure Age count

26 public int FA0;

27 public int FA1;

28 // weibull probability distributioin parameters


29 public double FAscalemin;
30 public double FAscalemax;
31 public double FAshapemin;
32 public double FAshapemax;
33 // description of age dependency vectors
34 public double FAdependencymax;
35
36 // Efficiency age count


37 public int EA0;


38 public int EA1;
39 // weibull probability distributioin parameters
40 public double EAscalemin;
41 public double EAscalemax;
42 public double EAshapemin;
43 public double EAshapemax;
44 // description of age dependency vectors
45 public double EAdependencymax;
46
47 //
48 public double NEprofitmin;
49 public double NEprofitmax;
50 public double NEprofitefficiencypenaltymax;
51 public double NESVaddmax;
52
53 //

54 public boolean useFE;


55 public double FEmul0;
56 public double FEmul1;
57 public double FEadd0;
58 public double FEadd1;
59 public int FEtime0;
60 public int FEtime1;
61 public double FEfitness0;
62 public double FEfitness1;
63 //


64

65 public boolean useDE;


66 public double DEmul0;
67 public double DEmul1;
68 public double DEadd0;
69 public double DEadd1;
70 public int DEtime0;
71 public int DEtime1;
72 public double DEfitness0;
73 public double DEfitness1;
74
75 // creations
76 public Component[] c;
77 public int totalSVindex;
78 public int totalSVcount;
79 public int totalFAindex;
80 public int totalFAcount;
81 public int totalEAindex;
82 public int totalEAcount;
83 public int itime;
84 public int ifitness;
85 public State initialstate;
86 public double NEfitness;
87 public double[] NEefficiencyfitness;
88 public double[] NESVadd;
89
90 public void dump() {
91 Util.dump(initialstate.toString());


92 Util.dump("T=" + T);
93 Util.dump("NC=" + NC);
94 for (int i = 0; i < c.length; i++) {
95 Util.dump("COMPONENT " + i);
96 c[i].dump();
97 }
98 // Normal Event
99 Util.dump("------");
100 Util.dump("NEfitness=" + NEfitness);
101 Util.dump("NEefficiencyfitness=" + Util.toString(
NEefficiencyfitness));
102 Util.dump("NESvadd=" + Util.toString(NESVadd));
103 }
104
105 public System createSystem() {
106 System sys = new System();
107 T = T1 * T2 * T3;
108
109 sys.setHorizon(T);
110 c = new Component[NC];
111 for (int i = 0; i < NC; i++) {
112 c[i] = new Component();
113 }
114
115 Vector<String> statevars = new Vector<String>();
116 Vector<StateFunction> statefunc = new Vector<
StateFunction>();
117


118 totalSVindex = statevars.size();


119 totalSVcount = 0;
120 for (int i = 0; i < NC; i++) {
121 Component com = c[i];
122 com.svindex = statevars.size();
123 for (int j = 0; j < com.svcount; j++) {
124 statevars.add("s" + i + "_" + j);
125 statefunc.add(null);
126 }
127 totalSVcount += com.svcount;
128 }
129
130 totalFAcount = 0;
131 totalFAindex = statevars.size();
132 for (int i = 0; i < NC; i++) {
133 Component com = c[i];
134 com.FAindex = statevars.size();
135 for (int j = 0; j < com.FAcount; j++) {
136 statevars.add("f" + i + "_" + j);
137 statefunc.add(com.createFAFunc(j));
138 }
139 totalFAcount += com.FAcount;
140 }
141
142 totalEAcount = 0;
143 totalEAindex = statevars.size();
144 for (int i = 0; i < NC; i++) {
145 Component com = c[i];


146 com.EAindex = statevars.size();


147 for (int j = 0; j < com.EAcount; j++) {
148 statevars.add("e" + i + "_" + j);
149 statefunc.add(com.createEAFunc(j));
150 }
151 totalEAcount += com.EAcount;
152 }
153
154 statevars.add("time");
155 statefunc.add(null);
156
157 statevars.add("fitness");
158 statefunc.add(null);
159
160 initialstate = new State(statevars.toArray(new String[0])
,
161 statefunc.toArray(new StateFunction[0]));
162 itime = initialstate.index("time");
163 ifitness = initialstate.index("fitness");
164 sys.setInitialState(initialstate);
165
166 // Normal Event
167 sys.addTrigger(createNE(), "n");
168
169 // Failure Events
170 if (useFE) {
171 for (int i = 0; i < NC; i++) {
172 Component com = c[i];


173 com.FEindex = sys.triggers.size();


174 for (int j = 0; j < com.FAcount; j++) {
175 sys.addTrigger(com.FEtrigger[j], "f" + i + "_" + j)
;
176 }
177 }
178 }
179
180 // Decidable Events
181 if (useDE) {
182 for (int i = 0; i < NC; i++) {
183 Component com = c[i];
184 com.DEindex = sys.triggers.size();
185 for (int j = 0; j < com.FAcount; j++) {
186 sys.addTrigger(com.createDETrigger(j), "d" + i + "_
" + j);
187 }
188 }
189 }
190
191 // Value Function
192 sys.setValueFunction(new ValueFunction() {
193 @Override
194 public double value(System system) {
195 double[] p = system.M[initialstate.index("fitness")];
196 return p[p.length - 1] / system.N;
197 }
198 });


199
200 return sys;
201 }
202
203 public Trigger createNE() {
204 NEfitness = inDouble(NEprofitmin, NEprofitmax);
205 NEefficiencyfitness = new double[totalEAcount];
206 NESVadd = new double[totalSVcount];
207
208 for (int i = 0; i < totalEAcount; i++) {
209 NEefficiencyfitness[i] = -inDouble(0, NEprofitefficiencypenaltymax);
210 }
211 for (int i = 0; i < totalSVcount; i++) {
212 NESVadd[i] = inDouble(0, NESVaddmax);
213 // Util.dump("totalSVcount "+totalSVcount+" : "+NESVadd [i]);
214 }
215
216 return new AlwaysTrigger(new Event() {
217 @Override
218 public void run(State state) {
219 state.val[itime]++;
220 state.val[ifitness] += NEfitness;
221 for (int i = 0; i < totalEAcount; i++) {
222 state.val[ifitness] += NEefficiencyfitness[i]
223 * state.val[totalEAindex + i];
224 }
225 for (int i = 0; i < totalSVcount; i++) {


226 double old = state.val[totalSVindex + i];


227 state.val[totalSVindex + i] += NESVadd[i];
228 }
229 }
230 });
231 }
232
233 public int inInt(int a, int b) {
234 return a + random.nextInt(b - a + 1);
235 }
236
237 public double inDouble(double a, double b) {
238 return a + random.nextDouble() * (b - a);
239 }
240
241 public class Component {
242 public int svindex; // index in state
243 public int svcount; // state-variables count
244
245 public int FAindex;

246 public int FAcount; // component age count


247 public double[] FAscale;

248 public double[] FAshape;

249 public WeibullDistribution[] FAfunc;

250 public double[][] FAdependency;


251


252 public int FEindex;

253 public double[][] FEmul;

254 public double[][] FEadd;

255 public int[] FEtime;

256 public double[] FEfitness;

257 public Trigger[] FEtrigger;


258
259 public int DEindex;

260 public double[][] DEmul;

261 public double[][] DEadd;

262 public int[] DEtime;

263 public double[] DEfitness;

264 public LinearTrigger[] DEtrigger;


265
266 public int EAindex;

267 public int EAcount; // efficiency age count


268 public double[] EAscale;

269 public double[] EAshape;

270 public WeibullDistribution[] EAfunc;

271 public double[][] EAdependency;


272
273 public double NEfitness;


274 public double[] NEefficiencyfitness;


275
276 public void dump() {

277 Util.dump("FAscale=" + Util.toString(FAscale));


278 Util.dump("FAshape=" + Util.toString(FAshape)); 279 for (int i
= 0; i < FAdependency.length; i++) {
280 Util.dump("FAdependency" + i + "="
281 + Util.toString(FAdependency[i]));
282 }
283 for (int i = 0; i < FEmul.length; i++) {
284 Util.dump("FEmul" + i + "=" + Util.toString(FEmul[i])
);
285 }
286 for (int i = 0; i < FEadd.length; i++) {
287 Util.dump("FEadd" + i + "=" + Util.toString(FEadd[i])
);
288 }
289 Util.dump("FEtime=" + Util.toString(FEtime));
290 Util.dump("FEfitness=" + Util.toString(FEfitness));
291 Util.dump("------");
292 for (int i = 0; i < DEmul.length; i++) {
293 Util.dump("DEmul" + i + "=" + Util.toString(DEmul[i])
);
294 }
295 for (int i = 0; i < DEadd.length; i++) {
296 Util.dump("DEadd" + i + "=" + Util.toString(DEadd[i])
);


297 }
298 Util.dump("DEtime=" + Util.toString(DEtime));
299 Util.dump("DEfitness=" + Util.toString(DEfitness));
300 Util.dump("------");
301 Util.dump("EAscale=" + Util.toString(EAscale));
302 Util.dump("EAshape=" + Util.toString(EAshape)); 303 for (int i =
0; i < EAdependency.length; i++) {
304 Util.dump("EAdependency" + i + "="
305 + Util.toString(EAdependency[i]));
306 }
307
308 }
309
310 public Component() {
311 svcount = inInt(sv0, sv1);
312
313 // Failure age
314 FAcount = inInt(FA0, FA1);
315 FAscale = new double[FAcount];
316 FAshape = new double[FAcount];
317 FAfunc = new WeibullDistribution[FAcount];
318 for (int i = 0; i < FAcount; i++) {
319 FAscale[i] = inDouble(FAscalemin * T, FAscalemax * T)
;
320 FAshape[i] = inDouble(FAshapemin, FAshapemax);
321 FAfunc[i] = new WeibullDistribution(FAscale[i],
FAshape[i]);
322 }


323 FAdependency = new double[FAcount][];


324 for (int i = 0; i < FAcount; i++) {
325 FAdependency[i] = new double[svcount];
326 for (int j = 0; j < svcount; j++)
327 FAdependency[i][j] = inDouble(0, FAdependencymax);
328 }
329
330 // Efficiency age
331 EAcount = inInt(EA0, EA1);
332 EAscale = new double[EAcount];
333 EAshape = new double[EAcount];
334 EAfunc = new WeibullDistribution[EAcount];
335 for (int i = 0; i < EAcount; i++) {
336 EAscale[i] = inDouble(EAscalemin * T, EAscalemax * T)
;
337 EAshape[i] = inDouble(EAshapemin, EAshapemax);
338 EAfunc[i] = new WeibullDistribution(EAscale[i],
EAshape[i]);
339 }
340 EAdependency = new double[EAcount][];
341 for (int i = 0; i < EAcount; i++) {
342 EAdependency[i] = new double[svcount];
343 for (int j = 0; j < svcount; j++)
344 EAdependency[i][j] = inDouble(0, EAdependencymax);
345 }
346
347 // Failure Event
348 FEmul = new double[FAcount][];


349 FEadd = new double[FAcount][];


350 FEtime = new int[FAcount];
351 FEfitness = new double[FAcount];
352 FEtrigger = new Trigger[FAcount];
353 for (int i = 0; i < FAcount; i++) {
354 final int index = i;
355 FEmul[i] = new double[svcount];
356 FEadd[i] = new double[svcount];
357 for (int j = 0; j < svcount; j++) {
358 FEmul[i][j] = inDouble(FEmul0, FEmul1);
359 FEadd[i][j] = inDouble(FEadd0, FEadd1);
360 }
361 FEfitness[i] = inDouble(FEfitness0, FEfitness1);
362 FEtime[i] = inInt(FEtime0, FEtime1);
363 FEtrigger[i] = new StochasticTrigger(new Event() {
364 @Override
365 public void run(State state) {
366 for (int i = 0; i < svcount; i++) {
367 state.val[svindex + i] = state.val[svindex + i]
368 * FEmul[index][i] + FEadd[index][i];
369 }
370 state.val[itime] += FEtime[index];
371 state.val[ifitness] += FEfitness[index];
372 }
373 }) {
374
375 @Override
376 public double value(State state) {


377 double sum = 0;


378 for (int j = 0; j < svcount; j++)
379 sum += FAdependency[index][j]
380 * state.val[svindex + j];
381 // Util.dump("sum "+sum);
382 return FAfunc[index].value(sum);
383 }
384 };
385 }
386
387 // Decidable Event
388 DEmul = new double[FAcount][];
389 DEadd = new double[FAcount][];
390 DEtime = new int[FAcount];
391 DEfitness = new double[FAcount];
392 DEtrigger = new LinearTrigger[FAcount];
393 for (int i = 0; i < FAcount; i++) {
394 final int index = i;
395 DEmul[i] = new double[svcount];
396 DEadd[i] = new double[svcount];
397 for (int j = 0; j < svcount; j++) {
398 DEmul[i][j] = inDouble(DEmul0, DEmul1);
399 DEadd[i][j] = inDouble(DEadd0, DEadd1);
400 }
401 DEfitness[i] = inDouble(DEfitness0, DEfitness1);
402 DEtime[i] = inInt(DEtime0, DEtime1);

403 }
404


    }

    public LinearTrigger createDETrigger(final int index) {
      // Build the trigger for decidable event 'index'; its mask is limited to
      // this component's own state variables.
      DEtrigger[index] = new LinearTrigger(new Event() {
        @Override
        public void run(State state) {
          for (int i = 0; i < svcount; i++) {
            state.val[svindex + i] = state.val[svindex + i] * DEmul[index][i] + DEadd[index][i];
          }
          state.val[itime] += DEtime[index];
          state.val[ifitness] += DEfitness[index];
        }
      }, initialstate);
      DEtrigger[index].clearMask();
      for (int j = 0; j < svcount; j++) {
        DEtrigger[index].setMask(svindex + j);
      }
      return DEtrigger[index];
    }

    public StateFunction createFAFunc(int i) {
      return createFunc(FAdependency[i], FAfunc[i]);
    }

    public StateFunction createEAFunc(int i) {
      return createFunc(EAdependency[i], EAfunc[i]);
    }

    public StateFunction createFunc(final double[] dependency, final WeibullDistribution func) {
      // The age is a dependency-weighted sum of the component's state
      // variables; the Weibull model is evaluated at that age.
      return new StateFunction() {
        @Override
        public double value(State state) {
          double age = 0;
          for (int j = 0; j < svcount; j++) {
            age += dependency[j] * state.val[svindex + j];
          }
          return func.value(age);
        }
      };
    }
  }
}


package com.vuebe.vhen;

import com.vuebe.ga.GA;
import com.vuebe.math.WeibullDistribution;
import com.vuebe.util.Util;


public class TestFactory {

  // Note: "System" here is the simulation system class of this package,
  // not java.lang.System.
  public System bestsystem = null;

  public void run() {
    Util.dump("Test Factory BEGIN");
    final SystemFactory sf = new SystemFactory();
    sf.NC = 2;

    sf.sv0 = 1;
    sf.sv1 = 1;

    sf.NEprofitmin = 1;
    sf.NEprofitmax = 1;
    sf.NEprofitefficiencypenaltymax = 1;
    sf.NESVaddmax = 1;

    sf.EA0 = 1;
    sf.EA1 = 2;
    sf.EAscalemin = 1;
    sf.EAscalemax = 1;
    sf.EAshapemin = 0.5;
    sf.EAshapemax = 3;
    sf.EAdependencymax = 1; // penalty for inefficiency coefficient, max

    sf.FA0 = 1; // Failure events min
    sf.FA1 = 2;
    sf.FAscalemin = 0.05;
    sf.FAscalemax = 0.2;
    sf.FAshapemin = 1;
    sf.FAshapemax = 2;
    sf.FAdependencymax = 1;

    sf.useFE = true;
    sf.FEmul0 = 0;
    sf.FEmul1 = 0;
    sf.FEadd0 = 0;
    sf.FEadd1 = 0;
    sf.FEtime0 = 2;
    sf.FEtime1 = 4;
    sf.FEfitness0 = -5;
    sf.FEfitness1 = -10;

    sf.useDE = true;
    sf.DEmul0 = 0;
    sf.DEmul1 = 0;
    sf.DEadd0 = 0;
    sf.DEadd1 = 0;
    sf.DEtime0 = 1;
    sf.DEtime1 = 1;
    sf.DEfitness0 = -1;
    sf.DEfitness1 = -1;

    System sys = sf.createSystem();

    GA ga = new GA();
    sys.setSimulationCount(2000);
    ga.setSelectionSize(10);
    ga.setEliteSize(2);
    ga.initiatePopulation(sys);
    for (int i = 0; i < 10000; i++) {
      ga.evolve();
      bestsystem = (System) ga.getBestIndividual();
      Util.dump("N: " + ga.getEvaluations() + " Value: " + bestsystem.getValue());
    }
    Util.saveString(bestsystem.toMatlab(), "C:/Users/jog04/Dropbox/scania/filedump", ".m");

    Util.dump("Test Factory END");
  }

  public static final void main(String[] args) {
    new TestFactory().run();
  }
}
