QUALITATIVE AND QUANTITATIVE RELIABILITY


ANALYSIS OF SAFETY SYSTEMS

by

R. Karimi, N. Rasmussen and L. Wolf

Energy Laboratory Report No. MIT-EL 80-015


May 1980
QUALITATIVE AND QUANTITATIVE RELIABILITY
ANALYSIS OF SAFETY SYSTEMS

by

R. Karimi, N. Rasmussen and L. Wolf

Energy Laboratory
and
Department of Nuclear Engineering

Massachusetts Institute of Technology


Cambridge, Massachusetts 02139

sponsored by

Boston Edison Company

under the

MIT Energy Laboratory Electric Utility Program

Energy Laboratory Report No. MIT-EL 80-015

May 1980

QUALITATIVE AND QUANTITATIVE RELIABILITY
ANALYSIS OF SAFETY SYSTEMS

by

Roohollah Karimi

Submitted to the Department of Nuclear Engineering in May 1980
in partial fulfillment of the requirements for the degree of
Doctor of Science.

ABSTRACT

A code has been developed for the comprehensive analysis
of a fault tree. The code, designated UNRAC (UNReliability
Analysis Code), calculates the following characteristics of an
input fault tree:

a) minimal cut sets,
b) top event unavailability as a point estimate and/or
   in time dependent form,
c) quantitative importance of each component involved, and
d) error bounds on the top event unavailability.

UNRAC can analyze fault trees with any kind of gate (EOR,
NAND, NOR, AND, OR), up to a maximum of 250 components and/or
gates.

For generating minimal cut sets, the method of bit manipulation
is employed. In order to calculate each component's
time dependent unavailability, a general and consistent set of
mathematical models is developed, and the repair time density
function is allowed to be represented by constant, exponential,
second-order Erlangian and log-normal distributions. A
normally operating component is represented by a three-state
model in order to incorporate probabilities for revealed
faults, non-revealed faults and false failures in the
unavailability calculations.

For importance analysis, a routine is developed that will
rearrange the fault tree to evaluate the importance of each
component to system failure, given that a component and/or a
sub-system is unavailable (i.e., down or failed). The importance
of each component can be evaluated based on the instantaneous
or average unavailability of each component. To
simulate the distribution of top event uncertainty, a Monte-Carlo
sampling routine is used. This method allows the user
to input uncertainties on the components' failure characteristics
(i.e., failure rate, average test time, average repair
time, etc.) and assign different distributions for subsequent
simulation.

The code is benchmarked against WAMCUT, MODCUT, KITT,
BIT-FRANTIC and PL-MODT. The results show that UNRAC produces
results more consistent with the KITT results than
either BIT-FRANTIC or PL-MODT. Overall, it is demonstrated
that UNRAC is an efficient and easy-to-use code and has the
advantage of being able to perform a complete fault tree
analysis within a single code.

Thesis Advisor: Norman C. Rasmussen


Title: Head of the Department of Nuclear
Engineering

TABLE OF CONTENTS

                                                                  Page

ABSTRACT ......................................................      2
ACKNOWLEDGEMENTS ..............................................      4
TABLE OF CONTENTS .............................................      5
LIST OF FIGURES ...............................................      9
LIST OF TABLES ................................................     13

CHAPTER 1: INTRODUCTION .......................................     16
    1.1  Reliability Concept and Methods of Evaluations ......     16
    1.2  Fault Tree and Reliability Analysis .................     19
    1.3  Fault Tree Evaluation ................................    22
         1.3.1  Qualitative Analysis .........................     22
         1.3.2  Quantitative Analysis ........................     23
    1.4  Objective of Thesis ..................................    26
    1.5  Structure of Thesis ..................................    27
    1.6  Glossary of Words ....................................    28

CHAPTER 2: ON THE ESSENTIALS OF RELIABILITY & PROBABILITY
           THEORY .............................................     33
    2.1  Introduction and Basic Concepts .....................     33
    2.2  Reliability Model of Non-Maintained Systems .........     38
    2.3  Reliability Model of Maintained Systems .............     41
         2.3.1  Introduction .................................     41
         2.3.2  Semi-Markovian Model .........................     48
         2.3.3  Flow Graph and Mason's Rule ..................     55
    2.4  Reliability Model of Periodically Maintained
         Systems .............................................     63

CHAPTER 3: PRINCIPLE STRUCTURE OF THE CODE UNRAC ..............     80
    3.1  Introduction .........................................    80
    3.2  Cut Set Generator ....................................    82
         3.2.1  Introduction .................................     82
         3.2.2  Cut Set Generator Used in UNRAC ..............     84
    3.3  Unavailability Evaluator .............................    98
         3.3.1  Introduction .................................     98
         3.3.2  Unavailability Evaluator Used in UNRAC .......    104
    3.4  Uncertainty Bounds Evaluator .........................   117
         3.4.1  Introduction .................................    117
         3.4.2  Monte-Carlo Simulator Used in UNRAC ..........    122
    3.5  On the Distribution of Input Variables ...............   128
    3.6  Moment Matching Method ...............................   130

CHAPTER 4: APPLICATION AND RESULTS ............................    137
    4.1  Introduction .........................................   137
    4.2  Auxiliary Feed Water System (AFWS), A Comparison
         with WASH 1400 ......................................    137
    4.3  An Example of an Electrical System, Comparison of
         UNRAC with FRANTIC and BIT-FRANTIC ..................    164
    4.4  A Chemical and Volume Control System (CVCS) .........    170
    4.5  A High Pressure Injection System (HPIS), Comparison
         of WASH 1400 and UNRAC ..............................

CHAPTER 5: SUMMARY, CONCLUSIONS AND RECOMMENDATIONS ...........    187
    5.1  Summary and Conclusions .............................    187
    5.2  Recommendations .....................................    192

REFERENCES ....................................................    194

APPENDIX A: DETAILED MATHEMATICAL EXPRESSIONS OF EQUATIONS
            (2.39) AND (3.1) ..................................    203
    A.1  Detailed Mathematical Form of Eqn. (2.39) for
         Exponential, 2nd Order Erlangian and Log-Normal .....    204
         A.1.1  Exponential Repair Time Density Function .....    204
         A.1.2  2nd Order Erlangian Repair Time Density
                Function .....................................    205
         A.1.3  Log-Normal Repair Time Density Function ......    206
    A.2  Detailed Mathematical Form of Eqn. (3.1) for
         Different Repair Density Functions ..................    208
         A.2.1  Exponential Repair Time Density Function .....    208
         A.2.2  2nd Order Erlangian Repair Time Function .....    211
         A.2.3  Log-Normal Repair Time Density Function ......    216
    A.3  Solution of Equations of the Form x = f(x) ..........    225

APPENDIX B: SOME BASIC BACKGROUND AND RELATED EXAMPLES USED
            IN SECTION 3.1 ....................................    228
    B.1  Mathematical Background for Boolean Operation .......    229
    B.2  Application of UNRAC to the Two Fault Tree Examples,
         for Comparison with MODCUT and WAM-CUT ..............    232
    B.3  On the Evaluation of the Equivalent Component's
         Parameters ..........................................    238
    B.4  References ..........................................    240

APPENDIX C: ON THE CODE STRUCTURE AND INPUT DESCRIPTION
            TO UNRAC ..........................................    241
    C.1  Code Structure ......................................    242
    C.2  On the Random Number Generator and Sorting Routine
         Used in the Monte-Carlo Simulator MCSIM .............    246
    C.3  INPUT Description of UNRAC ..........................    250
    C.4  References ..........................................    267
    C.5  On the Input Sample Fault Tree ......................    268



LIST OF FIGURES

Figure                          Title                             Page

 1.1   Overall reliability concept ...........................     17
 1.2   Typical structuring of a fault tree ...................     21
 2.1   Typical failure rate behavior .........................     40
 2.2   A simple two-state transition diagram .................     46
 2.3   A typical transition diagram ..........................     49
 2.4   Flow graph of equation (2.27) .........................     56
 2.5   Flow graph of equation (2.27) .........................     56
 2.6   Flow graph representation for the continuous time
       Semi-Markovian process ................................     62
 2.7   A typical time dependent unreliability of a
       periodically tested component .........................     64
 2.8   Comparison of the unavailabilities as calculated from
       equations developed in this study and those of
       Vesely's and Caldarola's for a periodically tested
       component .............................................     75
 2.9   Comparison of the application of Eqn. (2.42) to
       different repair distributions ........................     79
 3.1   Flow chart and steps used in cut set generator BIT ....     86
 3.2   Computer time to deterministically find minimal
       cut sets ..............................................     88
 3.3   Equivalent transformation of EOR, AND, NOR and NOT
       gates .................................................     89
 3.4   Detailed information of operation and failure modes
       of an electrical wiring system ........................     91
 3.5   Compact fault tree of the electrical wiring system ....     92
 3.6   Fault tree showing a dependence of B on the
       occurrence of A .......................................    100
 3.7   Dependent event connecting fault tree .................    102
 3.8   All possible paths of a pump failure ..................    105
 3.9   A general three-state component .......................    107
 3.10  Comparison of cumulative probability distribution of
       a log-normal with its approximated combination of
       exponentials (median of 21 hrs and error factor
       of 2.56) ..............................................    111
 3.11  Comparison of cumulative probability distribution of
       a log-normal with its approximated combination of
       exponentials (median of 21 hrs and error factor
       of 1.4) ...............................................    112
 3.12  Comparison of cumulative probability distribution of
       a log-normal with its approximated combination of
       exponentials (median of 100 hrs and error factor
       of 3) .................................................    113
 3.13  Logical flow chart used in UNRAC ......................    116
 3.14  Fault tree diagram, minimal cut sets and failure data
       of an RPS example .....................................    118
 3.15  A comparison of UNRAC with PREP & KITT, PL-MODT and
       BIT-FRANTIC ...........................................    119
 3.16  The execution time of the LIMITS and the SAMPLE
       codes .................................................    125
 3.17  Region in the (β1, β2) plane for various
       distributions .........................................    135
 3.18  Chart for determining appropriate Johnson
       distribution approximation ............................    136
 4.1   Simplified diagram of an AFWS .........................    139
 4.2   Simplified fault tree diagram of an AFWS ..............    140
 4.3   WASH 1400 qualitative results of an AFWS, pictorial
       summary ...............................................    142
 4.4   A further reduced fault tree of AFWS input to UNRAC ...    143
 4.5   AFWS time dependent unavailabilities as calculated
       by UNRAC ..............................................    146
 4.6   Cumulative top event distribution of AFWS, evaluated
       based on the average unavailability of each
       component .............................................    154
 4.7   Cumulative top event distribution of AFWS, evaluated
       based on the individual component's failure
       characteristics .......................................    157
 4.8   An example of an electrical system ....................    166
 4.9   A fault tree diagram of the example electrical
       system ................................................    167
 4.10  Comparison of time dependent unavailabilities of the
       electrical system as calculated by UNRAC, FRANTIC
       and BIT-FRANTIC .......................................    169
 4.11  A CVCS pump trains schematic diagram ..................    171
 4.12  A simplified fault tree diagram of CVCS pump trains ...    172
 4.13  CVCS time dependent unavailabilities as calculated
       by UNRAC and BIT-FRANTIC ..............................    176
 4.14  A simplified HPIS diagram .............................    180
 4.15  A simplified fault tree diagram of the HPIS ...........    181
 B.1   A general Venn diagram ................................    230
 B.2   An example fault tree used to compare the results of
       BIT and MODCUT ........................................    233
 B.3   An example fault tree used to compare the results of
       BIT and WAM-CUT .......................................    236
 C.1   The general routines used in UNRAC ....................    243
 C.2   Exponential distribution random number generator
       flow diagram ..........................................    247
 C.3   Normal distribution random number generator flow
       diagram ...............................................    248
 C.4   Flow chart of sorting routine used in MCSIM ...........    249

LIST OF TABLES

Table                           Title                             Page

 2.1   Comparison of model developed in this study for
       periodically maintained component with Vesely's and
       Caldarola's model for a typical example ...............     76
 2.2   Application of Eqn. (2.42) to different repair
       distributions .........................................     78
 3.1   Minimal cut sets generated by BIT for example fault
       tree ..................................................     97
 3.2   Comparison of results of this study with the results
       of KITT for an example fault tree .....................    120
 3.3   Summary of important characteristics of a selected
       distribution ..........................................    131
 4.1   AFWS component failure characteristics data ...........    145
 4.2   Component's importance calculations of AFWS using
       Birnbaum and Fussell-Vesely measures ..................    149
 4.3   Aux-Feed Water component's average unavailability
       error factor (EF) .....................................    150
 4.4   AFWS cumulative unavailability distribution ...........    151
 4.5   AFWS component failure characteristics error factors
       data ..................................................    156
 4.6   AFWS component failure characteristics error factors
       data, an example for long tail log-normal and gamma
       distributions .........................................    159
 4.7   The effect of the Monte-Carlo trials on the top
       event distribution ....................................    160
 4.8   The effect of the Monte-Carlo trials on the top
       event distribution for mixed components failure
       distributions .........................................    161
 4.9   Importance analysis of AFWS components given that
       one of the pump trains is out of service ..............    162
 4.10  Component's parameters data for the electrical
       example ...............................................    168
 4.11  CVCS pump trains component's parameters data ..........    175
 4.12  Comparison of the CVCS unavailability calculations
       for the exponential and second order erlangian
       repair distributions ..................................    177
 A.1   Comparison of cumulative probability distribution of
       a log-normal with μ = 3.0445 and σ = 0.57 with its
       approximated combination of exponentials with
       λ1 = 6.319 x 10^-2 and λ2 = 1.1263 x 10^-1 ............    217
 A.2   Comparison of the cumulative probability distribution
       of a log-normal with μ = 3.0445 and σ = 0.2, with its
       approximated combination of exponentials with
       λ1 = 8.1783 x 10^-2 and λ2 = 1.0873 x 10^-1 ...........    218
 A.3   Comparison of the cumulative probability distribution
       of a log-normal with μ = 4.60517 and σ = 0.66785, with
       its approximated combination of exponentials with
       λ1 = 1.0788 x 10^-2 and λ2 = .0955 x 10^-2 ............    219
 B.1   List of Boolean operations and identities .............    231
 B.2   List of minimal cut sets generated by BIT for
       Fig. B.2 ..............................................
 B.3   List of minimal cut sets generated by BIT for
       Fig. B.3 ..............................................    237
 B.4   Equivalent failure and repair rate of an AND/OR gate,
       for monitored components and components with constant
       unavailability ........................................    239
 C.1   List of input to UNRAC for the sample problem .........    270
 C.2   Results of UNRAC for the sample problem ...............    272

CHAPTER 1: INTRODUCTION

1.1 Reliability Concept and Methods of Evaluations

Reliability analysis is a method by which the degree of

successful performance of a system under certain stipulated

conditions may be expressed in quantitative terms. In order

to establish a degree of successful performance, it is

necessary to define both the performance requirement of the

system and the expected performance achievement of the system.

The correlation between these two can then be used to formu-

late a suitable expression of reliability as illustrated in

Fig. 1.1, Green and Bourne (1972).


Since both the required and achieved performance may be

subject to systematic and random variations in space and time,

it follows that the appropriate reliability expression is

generally of a probabilistic nature. Most definitions of
reliability, of which the following is the standard form

approved by the IEEE, take this aspect into account

(IEEE-STD-352 (1972)):

"Reliability is the characteristic of an item


expressed by the probability that it will
perform a required function under stated conditions
for a stated period of time."

The word "item" can cover a wide range in its use, i.e., it

could mean a single pressure switch or a large system.

The first step in a reliability analysis, therefore, is

to ascertain the pattern of variation for all the relevant


[Fig. 1.1: Overall reliability concept -- the REQUIRED PERFORMANCE and the ACHIEVED PERFORMANCE are correlated to yield RELIABILITY.]

performance parameters of the system both from the point of

view of requirement as well as from likely achievement. Per-

formance variations may be due not only to the physical

attributes of the system and its environment but also the

basic concepts, ideas and theories which lie behind the

system's design. Having established all the appropriate

patterns of variations, the system should be rigorously

examined to check its ability to work in the required way or

fulfill the correct and safe overall function. To do so,


the system should be analyzed as to when and how it may fail.

Failures may be partial or catastrophic which results in

partial or complete loss of operation or performance.

Failures may affect individual components or may cascade from

one component to another. Each type of failure will generally

lead to a different chain of events and a different overall

result. In order to establish the appropriate probability

expressions for each chain of events, it is useful to convert

the system functional diagram into a logic sequence diagram.

There are two ways to develop the system logic diagram,

1) positive logic

2) negative logic

In the positive logic, the chain of events consists of

a series of operational components or sub-systems that con-

stitute successful operations whereas in the negative logic

the chain of events are composed of component or sub-system

failure. The positive logic is called reliability block



diagram which has been used extensively during early designs

of satellites. The negative logic has acquired different

names depending on the types of logic interconnections.

They are:

1) fault tree logic

2) event tree logic

3) cause consequence chart logic

Reliability analysis by using the method of fault tree is the

most well known analytic method in use today for studying the

failure modes of complex systems. Of course, event tree and


cause consequence chart have also been used, however by

proper rearrangement, one can argue that the cause-consequence

chart (ccc) can be mapped into event-tree and fault tree com-

binations and both can be mapped into a set of fault trees for

which the Top events are the consequences of the ccc or event

tree, Evans (1974), Taylor (1974).

1.2 Fault Tree and Reliability Analysis

According to Haasl (1965), the concept of Fault Tree

Analysis (FTA) was organized by Bell Telephone Laboratories as

a technique with which to perform a safety evaluation of the

Minuteman Launch Control System in 1961. Bell engineers


discovered that the method used to describe the flow of "cor-

rect" logic in data processing equipment could also be used


for analyzing the "false" logic which results from component
failure. Further, such a format was ideally suited to the

application of probability theory in order to numerically

define the critical fault modes.

From the time of its invention, FTA has been used

successfully to assess, qualitatively and/or quantitatively,

different safety studies involving complex systems. The

analysis begins with a well defined undesired event (failure

of the system to perform its intended function under given

condition) which is called the Top event. Undesired event

is identified either by inductive analysis or a failure mode

and effect analysis. In that, one proceeds to uncover the

various combination of 'simpler' events (i.e., sub-system and

component failure) whose occurrence guarantees the occurrence

of the Top event. The analysis stops at the level at which

the events are completely understood and quantitative informa-

tion regarding their frequency of occurrence is available.

These events are called the primary input of the tree. Figure

1.2 shows a typical structuring of a fault tree.

The reliability analysis in safety studies of nuclear

reactors using fault tree analysis was advocated in the early

1970's, Garrick (1970). In 1975, a unique and comprehensive

safety study on nuclear power plants using FTA was completed,

WASH 1400 (1975). In that study, the potential risk of nuclear

power plants to the public in the U.S. was estimated. Many

other countries have since used the same methodology as in

WASH 1400 to estimate their power plant risk, Ringot (1978),

Nuclear Engineering (1979). Today, fault tree is widely used


[Fig. 1.2: Typical structuring of a fault tree (Barlow and Lambert (1975)) -- the tree proceeds from a top structure through system phases, major system levels and fault flows, down to subsystem and detailed hardware fault flows. Footnotes: the output of an AND gate exists only if all the inputs exist; the output of an OR gate exists if one or more inputs exist; an out-of-tolerance failure of a system element is a failure due to excessive operational or environmental stress; an inhibit gate is a special case of the AND gate, its oval indicating a conditional event.]

as a principal structure for performing system reliability

calculations, i.e., risk analysis of advanced reactors

(Liquid Metal Fast Breeder, LMFBR), Burdick et al. (1977),

Waste Management Accidental Risk, Battelle (1976) etc. In

summary, fault tree analysis has been used extensively in

reliability analysis since its invention in 1961. Therefore,

in this study, we use fault tree to estimate the reliability

of the given system. Before entering the evaluation tech-

niques of fault tree, we refer the reader to the "Glossary of

Words" given at the end of this Chapter.

1.3 Fault Tree Evaluation

Fault tree evaluation refers to the analysis of the

system logic model provided by the fault tree. This analysis

can be either qualitative or quantitative or both.

1.3.1 Qualitative Analysis

Qualitative analysis is comprised of those analytical

procedures that do not require the assignment of probability

values to any of the events defined on the system logic.

Qualitative analysis tends to deal with the problem on a more

basic level and to identify fundamental relationships that can

be established from logic model without quantification. An

important aspect of qualitative analysis is to determine the

minimal cut sets for the Top event or some intermediate event.

The importance stems from the fact that determining minimal cut
sets is not only an analytic goal itself, i.e., finding all of

the fundamental ways that an event can occur, but it is a



required initial step in many other fault tree evaluation

techniques including quantification. Several computer pro-

grams have been designed to achieve this important goal. In

general, the codes have been developed in one of two ways. In

one development, the concept of Boolean Algebra has been

adapted -- Bennett (1975), Fussel et al. (1974b) --- i.e., SETS

by.Worrel (1975) and ELRAFT by Semanderes (1971), and in the

other the concepts of cut and path sets have evolved from

Coherent Structure Theory, Barlow, Proschan (1975), Vesely

(1970), i.e., PREP by Vesely and Narum (1970) and MOCUS by

Fussell, et al. (1974a)

Thus, a prime implicant of the Boolean function defined

by an equation of system behavior, Bennett (1975) and Henely

and Kumamoto (1978), corresponds to a minimal cut set obtained

from a system structure function. Differences as well as

similarities occur in these approaches, but both are approp-

riate as long as an event and its complement cannot both occur

in each set.

Other codes in this category are: MODCUT (Modarres


(1979)), WAM-CUT (Erdmann (1978)), ALLCUTS (Van Sylke and

Griffing (1975)) and TREEL and MICSUP (Pande et al. (1975)).
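To make the top-down expansion performed by such cut-set generators concrete, the short sketch below may help. It is illustrative only -- it is not taken from UNRAC, MOCUS or any of the codes cited above, and the three-gate tree at the bottom is hypothetical. An AND gate merges its inputs into the same candidate cut set, an OR gate splits one candidate set into several, and a final absorption pass keeps only the minimal sets.

    def minimal_cut_sets(gates, top):
        # gates: {name: ("AND" | "OR", [inputs])}; leaves are basic events.
        cut_sets = [{top}]                       # start from the Top event
        expanded = True
        while expanded:
            expanded = False
            new_sets = []
            for cs in cut_sets:
                gate = next((g for g in cs if g in gates), None)
                if gate is None:                 # only basic events left
                    new_sets.append(cs)
                    continue
                expanded = True
                kind, inputs = gates[gate]
                rest = cs - {gate}
                if kind == "AND":                # AND: all inputs join one set
                    new_sets.append(rest | set(inputs))
                else:                            # OR: one new set per input
                    new_sets.extend(rest | {i} for i in inputs)
            cut_sets = new_sets
        # absorption: drop any set that strictly contains another set
        minimal = [s for s in cut_sets if not any(o < s for o in cut_sets)]
        unique = []
        for s in minimal:                        # remove duplicates
            if s not in unique:
                unique.append(s)
        return unique

    # Hypothetical example tree: TOP = G1 AND G2, G1 = A OR B, G2 = A OR C
    gates = {"TOP": ("AND", ["G1", "G2"]),
             "G1": ("OR", ["A", "B"]),
             "G2": ("OR", ["A", "C"])}
    print(minimal_cut_sets(gates, "TOP"))        # [{'A'}, {'B', 'C'}]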

1.3.2 Quantitative Analysis


As stated earlier, reliability analysis is of probablis-

tic nature therefore, a complete quantification of the system

is required in order to be able to assess a meaningful value

for the reliability of the system. In the fault tree analysis,


24

since the system structure logic is composed of a series of

negative (failure) logic, the term reliability is always

replaced by the term "unreliability." In a quantitative

sense, unreliability is a complement value of reliability. As

discussed in the previous section, generating the minimal cut

sets is the first step in any FTA. The second step in FTA is

to find the Top event unreliability by proper assignment of

probability values (data) to each basic event (component
failure).
The assignment of data described above will depend on the
type of results required. For example, if a point estimate of

the Top event failure probability is to be determined, then

the point estimates for the component failure probabilities

(or data allowing their computation) will have to be assigned.

Similarly, if a distribution is to be found for the Top event

unreliability, then one or more of the component characteris-

tics will have to be assigned in terms of a distribution.

Given the above data, the following quantitative evalua-

tions are generally useful in assessing a system reliability:

Numerical Probabilities:  Probabilities of system and/or
                          component failure (point estimate
                          and/or time dependent)

Quantitative Importance:  Quantitative ranking of components
                          or cut sets with respect to system
                          failure

Sensitivity Evaluation:   Effect of changes in models and
                          data, error bounding (multiple
                          point estimates, confidence
                          interval, distribution).

Having performed the above evaluations, a designer and/or a

user can compare the given systems and determine which will

probably be less dependable and have higher unreliability.

Hence, quantitative analysis is of great importance in

choosing the better system.

Several codes have been written for such evaluations.

Among them in FORTRAN are: KITT 1 and 2 (Vesely and Narum

(1970)), FRANTIC (Vesely and Goldberg (1977)), SAMPLE (WASH

1400 (1975)), LIMITS (Lee and Salem (1978)) and in PL/1 are:

PL-MODT and PL-MODMC by (Modarres (1979)). The above men-

tioned FORTRAN codes need minimal cut sets or related system

structure function as input, and each of them has certain

limitations in their applicability to different systems. For

example, although KITT is always coupled with PREP for its

minimal cut sets requirement, the coupled code is unable to

accommodate the periodically tested components as input and to

evaluate error bounds on the Top event. On the other hand, the

FRANTIC code can accommodate all types of monitoring situa-

tions, i.e., non-repairable, repairable, periodically tested

etc., but it requires the system structure function as input and


it is incapable of evaluating the error bound. In addition,

the approximations used in the code greatly overestimate the

system unreliability when the failure rates are greater than

10^-3 per hour. Finally, SAMPLE and LIMITS can only be used for

error propagation given the average unreliability of each

event and require the system structure function as an input.

In the case of PL-MOD series codes, both codes should be used

to calculate all three aforementioned evaluations.

1.4 Objective of Thesis

To assess the reliability of different designs, one has

to perform a complete qualitative and quantitative analysis

of the respective designs. The probabilistic values for the

comparison are the evaluations that were discussed in the

previous section. Several codes have been written by dif-

ferent authors to do such calculations, (see Section 1.3.1 and

1.3.2). Unfortunately, to do a complete job one has to use

two or three different codes depending on the type of results

desired. Therefore, the chance of making an error in the

preparation of the input data will greatly increase. Hence,


there is an incentive to prepare a single package composed of

different codes in order to minimize the confusion and error

that arise in using combination of the codes.

The objective of this thesis is to develop a code which

can evaluate the system unreliability (point estimate and/or

time dependent), find the quantitative importance of each

component involved, and calculate the error bound on the

system unreliability evaluation. To prepare such a code, the


methodologies used in REBIT (Wolf (1975)), KITT (Vesely (1970)),

FRANTIC (Vesely and Goldberg (1977)), SAMPLE (WASH 1400

(1975)), LIMITS (Lee and Salem (1978)) codes were implemented.



For generating minimal cut sets, a portion of REBIT which is

called BIT is used. BIT, at the beginning, was limited to 32

components and gates and would only accept AND or OR gate

logic. For this study, it has been modified to accept any

kind of gate logic and its capability has been increased at

the present time to 250 components and/or gates. However,

this capability could be increased to any number required.

For the quantification, a general and consistent set of
mathematical models for the component failures was developed,
and the FRANTIC code was restructured to accommodate these
models. For the components' importance analysis, the
differentiating method (the method of Birnbaum) and the
Fussell-Vesely method (Lambert (1975)) were implemented.
Finally, for error propagation analysis, we developed a routine
that allows the simulation to be carried out on the components'
failure parameters (failure rate, repair rate, test rate, etc.)
as well as on the average component unreliabilities.
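As an illustration of this kind of error propagation, the minimal sketch below samples component failure rates from log-normal distributions and pushes each sample through a simple cut-set expression. It is not the routine implemented in this study; the component names, medians, error factors, test interval and cut sets are assumed purely for the example.

    import math
    import random

    def sample_lognormal(median, error_factor):
        # WASH 1400 convention: error factor = 95th percentile / median,
        # so sigma = ln(EF)/1.645 for a log-normal variable.
        sigma = math.log(error_factor) / 1.645
        return median * math.exp(random.gauss(0.0, sigma))

    def top_unavailability(q):
        # Hypothetical system with minimal cut sets {A} and {B, C}
        # (rare-event approximation for the union).
        return q["A"] + q["B"] * q["C"]

    random.seed(1)
    data = {"A": (1.0e-6, 3.0),   # assumed (median failure rate per hr, EF)
            "B": (1.0e-5, 3.0),
            "C": (3.0e-5, 10.0)}
    tau = 720.0                   # assumed test interval, hr
    trials = []
    for _ in range(10000):
        # average unavailability of a periodically tested component ~ lambda*tau/2
        q = {name: sample_lognormal(med, ef) * tau / 2.0
             for name, (med, ef) in data.items()}
        trials.append(top_unavailability(q))
    trials.sort()
    for p in (0.05, 0.50, 0.95):
        print("%2dth percentile: %.2e" % (int(p * 100), trials[int(p * len(trials))]))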

1.5 Structure of Thesis

Chapter 2 discusses the basic concept of the reliability

analysis. In particular, it will show how different mathemati-

cal models can be used in the calculation of the time dependent

reliability and/or unreliability of a system or component.

Chapter 3 explains the principle structure of the code

developed in this study to accommodate all the models discussed

in Chapter 2. The code calculates the unreliability of any

system given its fault tree and is called Unreliability



Analysis Code (UNRAC). In addition, it will discuss how the

method of moment matching will be helpful in approximating the

Top event distribution. Furthermore, two methods of accommo-

dating the dependencies in the fault tree will be discussed.

Chapter 4 presents a series of examples to insure the

effectiveness of models encoded and compares some of the

WASH 1400 results with the time dependent values calculated by

this study.

Finally, Chapter 5 contains the summary and conclusion of

this study and it will discuss the appropriate recommendation

for further studies.

1.6 Glossary of Words

In this section, we present a list of standard terminology

that is in use today for any fault tree analysis. Most of the

symbols and terminology defined herein are taken from IEEE-STD-

352 (1972).

1. Basic Event or Fault: A basic fault event is an event

that requires no further development. In the fault

tree, Basic event is represented by a circle. The

probability of such event is derived from empirical

data or physics of failure analysis.



2. Faults Assumed Basic to the Given Fault Tree: This

type of event is shown by a diamond in a given fault

tree. The possible causes of this event are not

developed either because it is not required or the

necessary information is unavailable.

3. Resultant Events: The rectangle represents an


event that results from the combination of events of

the type described above through the input of a logic

gate.

4. AND Gate: The AND gate describes the logical operation
   whereby the coexistence of all input events is required to
   produce the output event. In other words, an output event
   occurs if and only if all the inputs occur.

5. OR Gate: The OR gate defines the situation whereby the
   output event will exist if one or more of the input events
   exist. That is, an output event occurs if one or more of
   the inputs occur.

6. Exclusive OR (EOR) Gate: This gate functions as an

OR gate with the restriction that specified inputs

cannot coexist.

7. Transfer Gates: The transfer symbol provides a tool

to avoid repeating sections of the fault tree. The

"Transfer out" gate represents the full branch that

follows it. It is represented by a symbol, say b,

and indicates that the branch is repeated somewhere

else. The "Transfer in" gate represents the branch

(in this case b) that is already drawn somewhere

else, and instead of drawing it again, it is simply

input at that point. The transfer gate is shown by a

triangle in the fault tree.


8. Failure: The termination of the ability of an item to

perform its required function. In general, there are

two types of failure:



a) Revealed (announced) failure. A failure of an


item which is automatically brought to light on
its own occurrence.

b) Unrevealed failure. A failure of an item which


remains hidden until revealed by some thorough
proof-testing procedure.

9. Failure Rate: The expected number of failures of a

given type, per item, in a given time interval.

10. Mutually Exclusive Events: Events that cannot exist

simultaneously.

11. Repair Rate: The expected number of repair actions

of a given type completed per unit of time.

12. Test Interval: The elapsed time between the initia-

tion of identical tests on a same item.

13. Test Schedule: The pattern or testing applied to the

parts of a system. In general, there are two pat-

terns of interest:

a) Simultaneous: Redundant items are tested at the


beginning of each test interval, one immediately
following the other.

b) Perfect Staggered: Redundant items are tested


such that the test interval is divided into equal
sub-intervals.

14. Logic Diagram: A diagram showing in a symbolic way
    the flow and/or processing of information.

15. Primary Failure: Basic event failure is called

Primary failure.

16. Cut Set: is a set of basic events whose simultaneous
    failure guarantees the Top event failure.

17. Path Set: is a set of basic events whose success
    guarantees system success.

18. Minimal Cut Set (Mode failure, Critical Path): is

the smallest set of primary failures such that if all

these primary failures simultaneously exist, then the

Top event failure exists.

19. Secondary Failure: is a failure due to excessive

environmental or operational stress placed on the

system element.

20. Confidence Level: the probability that the asser-

tion made about the value of the distribution

parameter, from the estimates of the parameter, is

true.

CHAPTER 2: ON THE ESSENTIALS OF RELIABILITY & PROBABILITY THEORY

2.1 Introduction and Basic Concepts

System reliability is a measure of how well a system

performs or meets its design objective. If successful opera-

tion is desired for a specified period of time, reliability

is defined as the probability that the system will perform

satisfactorily for the required time interval.

Historically, reliability has usually been considered

during system design, but it acquired more attention first

in radar design in the early 1940s and then in the design of the

Minuteman and satellite systems. Most of the mathematical

models used to help better understand system reliability have

been studied by the pioneers in this field and their applica-

tions have been tested during the aforementioned designs.

This work is an effort to further develop the analytical

methods that have been established earlier by such authors as:

Cramer (1946), Feller (1957), Parzen (1960), Cox (1963) and

Takacs (1957).

Therefore, in this Chapter, we discuss the essentials of

reliability and probability theory that will be used in

developing the model in succeeding Chapters. To do this, we

first define the measure(s) of system effectiveness and then

discuss the appropriate theory and mathematical model to

determine these measures.



In general, the measures of system effectiveness are

criteria by which alternate design policies can be compared

and judged, i.e., alternate design policies for an Emergency

Safety feature may be judged on the probability of actuation

detection and/or instantaneous availability of the system.

However, there are many problems associated with developing

measures of effectiveness. In the first place, it is not

always possible to express quantitatively all the factors of

interest in a model. Secondly, the measures of the system's

effectiveness can change with time. What may be effective

today may not be tomorrow due to technological and political

innovations. Today there are four major measures of

system reliability effectiveness. These are:

1) Availability

This measure is applicable to maintained systems.

There are several measures of availability in use

today, which we can categorize as:

a) Instantaneous Availability: The probability


that the systems will be available at any random
time t.

b) Average Up-Time: The proportion of time in a


specified interval (O,T) that the system is
available for use.

c) Steady-State Availability: The proportion of


time the system is available for use when the
time interval considered is very large.

In the limit, a) and b) approach the steady-state

availability c). Which measure to choose depends

upon the system mission and its condition of use.

2) Probability of Survival
The probability of system survival is a measure of

the probability that a system will not reach a com-

pletely failed state during a given time interval,

given that at the beginning of the interval the

system was in a fully operable state. The approp-

riate use for the probability of survival as a

measure of system reliability effectiveness occurs

in those systems where maintenance is either not

possible during operation or can only be performed

at defined times, i.e., equipment in the reactor

building. This measure is highly dependent upon

system configuration and the condition of maintenance.

In general we would expect a doubly redundant

system to have a higher survival probability than a

non-redundant system. Also, we should expect to have

a higher survival probability where repair on a

redundant system is being performed immediately upon

the inception of the failure of one of the parallel

equipment trains, than where repair is not begun

until all equipment reaches a failed state. We shall

henceforth denote the latter case as a non-maintained

and the former case as a maintained system.



3) Mean Time to System Failure

The literature on reliability contains such terms as

Mean-Time-Between-Failure (MTBF), Mean-Time-To-

Failure (MTTF) and Mean-Time-To-First-Failure (MTTFF).


These terms do not describe the same thing. MTBF is
specifically applicable to a population of equipment

where we are concerned with the average time between

the individual equipment failures. When we are

interested in one component or system, the MTTF and

MTTFF are applicable. The difference between MTTFF

and MTTF is the specification of the initial condi-

tion and how time is counted. The difference becomes

apparent when dealing with redundant systems under

different maintenance conditions. For non-maintained

system, MTTF is a measure of expected time the system

is in an operable state before all the equipment
reaches a failed state, and the time is counted from

initial fully operable to final failed state. For a

maintained system, MTTFF is a measure of the expected

time the system is in an operable state given that

all equipment was initially operable at time zero.

4) Duration of Single Downtime


For some systems, the duration of a single downtime
may be the most meaningful measure of system relia-

bility effectiveness. For example, for an air traffic
control system which establishes flight plans and
directs landings and take-offs, a long duration of a
single downtime may mean queuing at take-off,

schedule delays, lost flights and so on. In reactor

safety, long downtime of an emergency system may

require the shut down of the reactor.

In selecting a measure of effectiveness, we should keep

in mind that the measure will have little value if one is

unable to express it quantitatively in order to compare

between alternate designs. To be able to express these mea-

sures quantitatively, we need to model each equipment or

component of a system according to its maintenance policy.

Each component can come under the following two major cate-

gories:

1) non-maintained

2) maintained.

In one approach, Vesely and Goldberg (1977), the compo-

nents are classified according to the type of failure rates

and the manner in which repair policies are executed. The

Vesely and Goldberg (1977) classification is as follows:

Class 1: Component with constant unavailability

Class 2: Component with unrepairable policy (non-


maintained)
Class 3: Component with repair in which the faults are
immediately detected (maintained)
Class 4: Component with repair in which the faults are
detected upon inspection, i.e., periodically

maintained component (maintained).

As we can see, there is not much difference between the

above classification and the two major repair policies.

Therefore, in the following sections we discuss the mathemati-

cal models for non-maintained and maintained components.

2.2 Reliability Model of Non-Maintained Systems

When a fixed number, N, of identical systems are

repeatedly operated for t amount of time, there will be Ns(t)

systems that survive the test and NF(t) that fail. The

reliability, R(t), of such a collection of systems can be

expressed as:

R(t) = \frac{N_s(t)}{N_s(t) + N_F(t)}                         (2.1)

or equivalently

R(t) = 1 - \frac{N_F(t)}{N}                                   (2.2)

letting N be fixed we can write

dN_F(t) = -N\,dR(t)                                           (2.3)

where dNF(t) is the rate at which systems fail. Now if we divide

both sides of Eqn. (2.3) by Ns(t), and denote the left hand side
by r(t)

r(t) = \frac{1}{N_s(t)}\,\frac{dN_F(t)}{dt} = -\frac{N}{N_s(t)}\,\frac{dR(t)}{dt}          (2.4)

then r(t) is the instantaneous probability of failure. By

simple manipulation of Eqn. (2.4) we get

R(t) = \exp\left(-\int_0^t r(x)\,dx\right)                    (2.5)
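The manipulation is a one-line separation of variables, spelled out here for completeness: since N_s(t)/N = R(t), Eqn. (2.4) can be written as r(t) = -(1/R(t))\,dR(t)/dt, and integrating from 0 to t with R(0) = 1 gives

\int_0^t r(x)\,dx = -\int_0^t \frac{dR(x)}{R(x)} = -\ln R(t)
\quad\Longrightarrow\quad
R(t) = \exp\left(-\int_0^t r(x)\,dx\right),

which is Eqn. (2.5).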

The function r(t) is called the failure rate function, hazard

function or hazard rate.

A study of many systems (Mechanical, Electronic, (Davis

(1952)), during their normal life expectancy has led to the

conclusion that their failure rate follows a certain basic

pattern. (See Fig. 2.1)

It has been found that systems exhibit a high failure

rate during their initial period of operation. These are

failures which are typically caused by manufacturing flaws and

this initial period is called the infant mortality or break

in period. The operation period that follows infant mortality


has a smaller failure rate which remains approximately con-

stant until the beginning of the Wear-out period. Most of

the components in use are considered to have constant failure

rates and the failures generally result from severe, unpre-

dictable and usually unavoidable stresses that arise from

environmental factors such as vibration, temperature shock,

etc. Therefore, in this study we assume a constant failure

rate, r(t) = \lambda, for any component. Hence from Eqn. (2.5) we


have

R(t) = \exp(-\lambda t)                                       (2.6)
[Fig. 2.1: Typical failure rate behavior -- failure rate versus time, showing the infant mortality period followed by the operating period.]

and if we assume that the system or the component under study

has a failure density function of f(t) and failure distribu-

tion function F(t) we have

f(t) = \frac{dF(t)}{dt} = -\frac{dR(t)}{dt} = \lambda\,e^{-\lambda t}

where f(t) is an exponential failure time density function.

Therefore, for a non-maintained component or system, the

reliability function is given by Eqn. (2.6) and unreliability

by:

F(t) = 1 - R(t) = 1 - \exp(-\lambda t)                        (2.7)

and the mean time to failure (MTTF) is given by:

MTTF = \int_0^\infty t\,f(t)\,dt = 1/\lambda
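As a quick numerical illustration of Eqns. (2.6) and (2.7) and of the MTTF expression (the failure rate and mission time below are assumed values chosen only for the example, not data from this study):

    import math

    lam = 1.0e-4                         # assumed constant failure rate, per hr
    t = 720.0                            # assumed mission time, hr

    reliability = math.exp(-lam * t)     # Eqn. (2.6)
    unreliability = 1.0 - reliability    # Eqn. (2.7)
    mttf = 1.0 / lam                     # MTTF = 1/lambda

    print("R(720 hr) = %.4f" % reliability)     # ~0.9305
    print("F(720 hr) = %.4f" % unreliability)   # ~0.0695
    print("MTTF      = %.0f hr" % mttf)         # 10000 hr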

2.3 Reliability Model of Maintained Systems

2.3.1 Introduction

When we allow repair during the desired interval of

operation, the concept of reliability is usually replaced by

the concept of instantaneous availability which is defined as

the probability that the system is in an acceptable state at

any time, t, given that the system was fully operable at time

t = O. Another important measure in a maintained system is

system downtime, which is defined as the total time from



system failure until return to service. Service or repair

downtime is generally divided into two basic categories:

1) Active repair time


2) Administration time

Active repair time can be divided into recognition or

detection time, fault location or diagnosis time, correction

or repair time and verification or final malfunction check

time. Administrative contribution to downtime is that time

required to obtain a spare part, or is spent in waiting for

maintenance crew, tools or test equipment. The time required

to perform each of these categories varies statistically from

one failure instance to another and there usually exists a

large number of relatively short-time and a small number of

long-time repair periods. However, in modeling a maintained

system, the repair downtime is frequently represented by one

of the following probability density functions:

1) exponential

2) log normal

3) gamma

In this study we assume g(t) to represent the general repair
time density function; the mean downtime or Mean-Time-To-Repair
(MTTR) is then given by:

MTTR = \int_0^\infty t\,g(t)\,dt                              (2.8)

and the probability that a failed system returns to service by

some time T is given by:

Pr\{\text{downtime} \le T\} = \int_0^T g(t)\,dt               (2.9)

For evaluating the system unavailability one can use the

following two methods:

1) differentiating method applicable for simple failure


and repair distribution

2) Semi-Markovian method for more complex distribution


and system.

In this section, we discuss the differentiating method

for a two-state system with failure and repair time density

functions. At any time, t, the system is in one of the

following states with probability, say, Pi(t), i = 1,2.

State 1: The system is up

State 2: The system is down

g(t) = \mu e^{-\mu t} is the repair time density, where \mu is the
repair rate.

If the system is in state 1 at time t, the probability

that the system does not go to state 2 before t + At is equal

to the reliability of the system and is:

e^{-\lambda\Delta t} = 1 - \lambda\,\Delta t + O(\Delta t),


where O(\Delta t) is a quantity having the property that
\lim_{\Delta t \to 0} O(\Delta t)/\Delta t = 0. If the system is in state 2 at time t,
the probability that the system remains in state 2 before
t + \Delta t is equal to (from Eqn. (2.9)) e^{-\mu\Delta t} = 1 - \mu\,\Delta t + O(\Delta t).

Therefore, in order for the system to be in state 1 at some

time t + At, either it must be in state 1 at time t and no

failure occurs in the interval (t, t+ At), or it is in state 2

at time t and it is repaired during (t, t+ At). Thus:

P_1(t+\Delta t) = P_1(t)\,[1 - \lambda\Delta t] + P_2(t)\,\mu\Delta t + O(\Delta t)

or

\frac{P_1(t+\Delta t) - P_1(t)}{\Delta t} = \mu P_2(t) - \lambda P_1(t) + \frac{O(\Delta t)}{\Delta t}

Letting \Delta t \to 0,

\dot P_1(t) = \mu P_2(t) - \lambda P_1(t)                     (2.10)

where \dot P_1(t) = dP_1(t)/dt. But we know that P_1(t) + P_2(t) = 1, hence

\dot P_1(t) = \mu - (\lambda + \mu)\,P_1(t)                   (2.11)



Eqn. (2.11) is a simple differential equation with initial
conditions P_1(0) = 1, P_2(0) = 0. Hence

P_1(t) = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\,\exp[-(\lambda+\mu)t]

The instantaneous availability of the system is equal to P_1(t):

A(t) = P_1(t) = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\,\exp[-(\lambda+\mu)t]          (2.12)

The term steady state availability is used when the

mission period is much larger than MTTR.

A_{ss} = A(\infty) = \frac{\mu}{\mu+\lambda} = \frac{\text{MTTF}}{\text{MTTR}+\text{MTTF}}            (2.13)

Note: In the above discussion we assume the unit (system) is


as good as new after repair or replacement.
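For illustration, Eqns. (2.12) and (2.13) can be evaluated directly; the failure and repair rates below are assumed values chosen only for the example:

    import math

    lam = 1.0e-3          # assumed failure rate, per hr
    mu = 1.0 / 24.0       # assumed repair rate, per hr (MTTR = 24 hr)

    def availability(t):
        # Instantaneous availability, Eqn. (2.12)
        return mu / (lam + mu) + (lam / (lam + mu)) * math.exp(-(lam + mu) * t)

    a_ss = mu / (mu + lam)   # steady-state availability, Eqn. (2.13)

    for t in (0.0, 24.0, 168.0, 1000.0):
        print("A(%6.0f hr) = %.5f" % (t, availability(t)))
    print("A(infinity)  = %.5f  (= MTTF/(MTTF + MTTR))" % a_ss)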

In a more general case, the method of repair as a renewal

process has been used by Barlow and Hunter (1961), Cox (1963),

Takacs (1959) to evaluate the up and down state probabilities

of a unit (system) by developing a detailed mathematical model

to inter-relate failure and repair distributions to the

probabilities. To prevent duplication of the method, we only

show the modified result of state and transition probabilities

for a two-state system. Let F(t) be the failure probability


distribution and G(t) be the repair probability distribution.
By virtue of the state diagram shown in Fig. 2.2 one can write:

(for proof see Rau (1970)).


[Fig. 2.2: A simple two-state transition diagram -- state 1 is the up state, state 2 the down state; F(t) is the failure probability distribution and G(t) the repair probability distribution.]

P^*_{12}(s) = \frac{F^*(s)\,[1 - G^*(s)]}{s\,[1 - G^*(s)F^*(s)]}        (2.14)

P^*_{21}(s) = \frac{G^*(s)\,[1 - F^*(s)]}{s\,[1 - G^*(s)F^*(s)]}        (2.15)

P^*_{11}(s) = \frac{1 - F^*(s)}{s\,[1 - G^*(s)F^*(s)]}                  (2.16)

P^*_{22}(s) = \frac{1 - G^*(s)}{s\,[1 - G^*(s)F^*(s)]}                  (2.17)

where K*(s) is the Laplace transform of k(t), k = f, g;
P*_{ij}(s) is the Laplace transform of P_{ij}(t), i, j = 1, 2; and
P_{ij}(t) = Pr{system is in state j at time t, given it was in
state i at time 0}.

If we assume exponential distribution for both failure

and repair, the inverse transform of equation (2.16) will

yield Eqn. (2.12).

If f(t) = \lambda e^{-\lambda t} and the repair time is a constant \beta, then one
can show

P_{11}(t) = \sum_{j=0}^{[t/\beta]} \frac{[\lambda(t - j\beta)]^j}{j!}\,e^{-\lambda(t - j\beta)}        (2.18)

where [t/\beta] is the largest integer less than or equal to
t/\beta.

One should note that it is not always easy to find the
inverse transform of P*_{ij}(s) except for special cases (which
we mentioned above), and typically one must use a numerical
technique to find P_{ij}(t).
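For the constant repair time case no transform inversion is actually needed, since Eqn. (2.18) can be summed directly. The sketch below does exactly that; the failure rate and repair time are assumed values chosen only for the example:

    import math

    def p11_constant_repair(t, lam, beta):
        # Probability the unit is up at time t given it was up at t = 0,
        # Eqn. (2.18): sum over j = 0 ... [t/beta].
        n = int(t // beta)
        return sum((lam * (t - j * beta)) ** j / math.factorial(j)
                   * math.exp(-lam * (t - j * beta))
                   for j in range(n + 1))

    lam, beta = 0.01, 20.0    # assumed failure rate (per hr) and repair time (hr)
    for t in (10.0, 50.0, 200.0, 1000.0):
        print("P11(%6.0f hr) = %.4f" % (t, p11_constant_repair(t, lam, beta)))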

2.3.2 Semi-Markovian Model

In the last section, we discussed a method which is only

applicable to a two-state system. In general, a system is

comprised of different components and each component is subject

to a certain inspection-repair-replacement policy. Therefore,


a more general model is needed to handle a complex system.

One of the models which has been used successfully in the past

two decades is the Markovian model. The application of this

model in reliability analysis can be found in numerous ref-

erences, some recent ones being: Kemeny and Snell (1960),

Barlow and Proschan (1965), Shooman (1968), Green and Bourne


(1972). In this section, we utilize the model discussed by

Howard (1972). Basically, Markovian models are functions of

two random variables; the state of the system i and the time

of observation t. A discrete parameter stochastic process

{X(t); t=0,1,2,...} or a continuous parameter process

{x(t); t>0} is said to be a Markov process if, for any set of

n time points t_1 < t_2 < \cdots < t_n in the index set of the
process and any real numbers x_1, x_2, \ldots, x_n:

Pr[X(t_n) \le x_n \mid X(t_1) = x_1, \ldots, X(t_{n-1}) = x_{n-1}]
    = Pr[X(t_n) \le x_n \mid X(t_{n-1}) = x_{n-1}]            (2.19)

[Fig. 2.3: A typical transition diagram.]

Central to the theory of Markov process models are the

concepts of state and state transition. In reliability

analysis, a state can often be defined simply by listing the

components of a system which are working satisfactorily. And,


in general, the number of distinguished states depends on the

number and function of the system equipment. In the model we


will always consider a finite number of states.

The manner in which the states are interchanged in the

course of time is called transition. As we can see from the

general Markov process, equation (2.19), only the last state

occupied by the process is relevant in determining its future

behavior. Thus, the probability of making a transition to

each state of the process depends only on the state presently

occupied. If we denote Pij to be the transition probability,

by definition we have:

p_{ij} = Pr\{S(n+1) = j \mid S(n) = i\},  1 \le i, j \le N,  n = 0, 1, 2, \ldots        (2.20)

and it is defined as the probability that a process presently

in state (S(.)) i will occupy state j after its next transi-

tion, see Fig. 2.3.


Another quantity which is always needed in dealing with

a Markov process is the state transition probability denoted

by \phi_{ij}(n), which is defined to be the probability that the
process will occupy state j at time n given that it occupied
state i at time 0, or

\phi_{ij}(n) = Pr\{S(n) = j \mid S(0) = i\},  1 \le i, j \le N,  n = 0, 1, 2, \ldots    (2.21)

Both p_{ij} and \phi_{ij}(n) have the following properties:

1)  0 \le \phi_{ij}(n),\; p_{ij} \le 1,   1 \le i, j \le N
                                                              (2.22)
2)  \sum_{j=1}^{N} \phi_{ij}(n) = 1  and  \sum_{j=1}^{N} p_{ij} = 1,   i = 1, 2, \ldots, N;  n = 0, 1, 2, \ldots

In the above equations N is the total number of states

and n is the number of transition steps in time.
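For a discrete-parameter chain, the state transition probabilities of Eqn. (2.21) follow from repeated multiplication of the one-step matrix: the matrix of the \phi_{ij}(n) is the n-th power of (p_{ij}). The sketch below (a simple illustration with an assumed two-state up/down chain, not an example from this study) shows \phi_{11}(n) approaching its limiting value:

    def mat_mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    def n_step(p, n):
        # phi(n) = P**n, with phi(0) the identity matrix
        phi = [[1.0 if i == j else 0.0 for j in range(len(p))]
               for i in range(len(p))]
        for _ in range(n):
            phi = mat_mul(phi, p)
        return phi

    # Assumed one-step transition probabilities: state 1 = up, state 2 = down
    P = [[0.99, 0.01],    # up -> up,   up -> down
         [0.20, 0.80]]    # down -> up, down -> down

    for n in (1, 10, 100):
        print("phi_11(%3d) = %.4f" % (n, n_step(P, n)[0][0]))   # -> 0.20/0.21 ~ 0.9524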

Having defined Markov processes, we can relate the afore-


mentioned definitions to the continuous time Semi-Markov

process. The Semi-Markov process is a process whose succes-

sive state occupancies are governed by the transition

probability of a Markov process, but whose stay in any state

is described by a positive random variable, Tij, that depends

on the state presently occupied, and the state to which the

next transition will be made, j. The quantity Tij is called

holding time and has a holding time density function hij(t).

One can also define an unconditional waiting time density

function of state wi(t) by


52

N
w i (t) = 1 pij h i (t) (2 . 23)
jl P 3

for the cases when we do not know the successor state. By


virtue of equations (2.20), (2.21), and (2.23) one can write:

\phi_{ij}(t) = \delta_{ij}\,{}^{cc}w_i(t) + \sum_{k=1}^{N} p_{ik} \int_0^t h_{ik}(\tau)\,\phi_{kj}(t-\tau)\,d\tau        (2.24)

i, j = 1, 2, \ldots, N,   t \ge 0

where

\phi_{ij}(t) = the probability that the process will occupy
               state j at time t given that it entered state i
               at time 0,

\delta_{ij}  = Kronecker delta = 1 if i = j, 0 otherwise,

{}^{cc}w_i(t) = cumulative complementary distribution of w_i(\cdot),
               i.e., the probability that the process will leave
               its starting state i at a time greater than t.

The physical meaning of terms in Eqn. (2.24) can be defined as:

first term  = Probability that the process will end up in
              state j in one step during (0, t)

second term = Probability that the process makes its first
              transition from the starting state i to some
              state k at some time \tau, 0 \le \tau \le t, and then
              proceeds somehow from state k to state j in the
              remaining time t - \tau; this probability is
              summed over all states k and over times 0 to t.

One can solve the Equation (2.24) by two methods:

1) Method of matrix manipulation

2) Method of flow graph theory

In this section we discuss the first method. If we take

the Laplace transform of equation (2.24) we will have:

\phi^*_{ij}(s) = \delta_{ij}\,{}^{cc}w^*_i(s) + \sum_{k=1}^{N} p_{ik}\,h^*_{ik}(s)\,\phi^*_{kj}(s)        (2.25a)

and in matrix form

\Phi(s) = [\,I - P \circ H(s)\,]^{-1}\;{}^{cc}W(s)            (2.25b)

where

\Phi(s) = (\phi^*_{ij}(s)),   I = identity matrix,
P = (p_{ij}),   H(s) = (h^*_{ij}(s)),
{}^{cc}W(s) = the diagonal matrix of the {}^{cc}w^*_i(s),

and P \circ H(s) denotes the element-by-element product (p_{ik}\,h^*_{ik}(s)).

The right hand side of equation (2.25) consists of known

functions of s, and one can find \Phi(s) analytically or
numerically, depending on the degree of complexity of the
holding time distributions. To demonstrate the application of
equation (2.25) we consider a general two-state unit with

equation 2.25) we consider a general two-state unit with

holding distribution and transition probabilities shown in

the following figure.


[Two-state unit: transitions 1 -> 2 with p_{12}, h_{12}(t) and 2 -> 1 with p_{21}, h_{21}(t).]

By comparison with Fig. 2.2 we have:

h_{12}(t) = f(t),   h_{21}(t) = g(t)

and in addition

w_1(t) = h_{12}(t),   {}^{cc}w_1(t) = 1 - \int_0^t h_{12}(\tau)\,d\tau

N = 2,   P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

Then by simple manipulation we find, for example, for i = 1, j = 1,

\phi^*_{11}(s) = \frac{{}^{cc}w^*_1(s)}{1 - h^*_{12}(s)\,h^*_{21}(s)} = \frac{1 - h^*_{12}(s)}{s\,[1 - h^*_{12}(s)\,h^*_{21}(s)]}

Replacing h^*_{12}(s) and h^*_{21}(s) with F^*(s) and G^*(s) we get

\phi^*_{11}(s) = \frac{1 - F^*(s)}{s\,[1 - F^*(s)\,G^*(s)]}

which is the same as equation (2.16). Hence by using


equation (2.24) we can generate our probability function and

find the respective availability. For further examples we


refer the reader to Howard (1972).
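As a quick symbolic check (a sketch using the sympy library; it is not part of the thesis), Eqn. (2.25b) can be evaluated for this two-state unit with exponential failure and repair and compared with the Laplace transform of Eqn. (2.12):

    import sympy as sp

    s, lam, mu = sp.symbols('s lambda mu', positive=True)

    F = lam / (s + lam)            # F*(s) for f(t) = lam*exp(-lam*t)
    G = mu / (s + mu)              # G*(s) for g(t) = mu*exp(-mu*t)

    P_H = sp.Matrix([[0, F],       # element-by-element product P o H(s)
                     [G, 0]])
    ccW = sp.diag((1 - F) / s,     # diagonal matrix of the ccw*_i(s)
                  (1 - G) / s)

    Phi = (sp.eye(2) - P_H).inv() * ccW        # Eqn. (2.25b)

    # Laplace transform of A(t) = mu/(lam+mu) + lam/(lam+mu)*exp(-(lam+mu)t)
    A_transform = mu / ((lam + mu) * s) + lam / ((lam + mu) * (s + lam + mu))

    print(sp.simplify(Phi[0, 0] - A_transform))   # prints 0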

2.3.3 Flow Graph and Mason's Rule

When dealing with a set of algebraic equations such as:

a_{11}X_1 + a_{12}X_2 + a_{13}X_3 + a_{14}X_4 = b_1

a_{21}X_1 + a_{22}X_2 + a_{23}X_3 + a_{24}X_4 = b_2
                                                              (2.26)
a_{31}X_1 + a_{32}X_2 + a_{33}X_3 + a_{34}X_4 = b_3

a_{41}X_1 + a_{42}X_2 + a_{43}X_3 + a_{44}X_4 = b_4

The a's may be regarded as system coefficients, the b's as


input and x's as output. If the equations are solvable, they

can be rearranged such that one output variable with a coef-

ficient of unity is separated from each equation. In set

(2.26) above let us solve the first for X 1, the second for X 2

etc.

X_1 = \frac{1}{a_{11}}\,(b_1 - a_{12}X_2 - a_{13}X_3 - a_{14}X_4)

X_2 = \frac{1}{a_{22}}\,(b_2 - a_{21}X_1 - a_{23}X_3 - a_{24}X_4)
                                                              (2.27)
X_3 = \frac{1}{a_{33}}\,(b_3 - a_{31}X_1 - a_{32}X_2 - a_{34}X_4)

X_4 = \frac{1}{a_{44}}\,(b_4 - a_{41}X_1 - a_{42}X_2 - a_{43}X_3)

Figures 2.4 and 2.5 show two different flow graph arrangements

equivalent to Eqn. (2.27). As we can see, both flow graphs

are cluttered to the point of incomprehension. These types of

flow graphs are typical when one uses the Markovian model to

calculate the system reliability. Therefore, if the systems


56

XA X4

Fig. 2.4:

!
Flow graph of Eqn. (2.27), (Henley
and Williams (1973)).
, ask

Fig. 2.5: Flow graph of Eqn. (2.27)(Henley


and Williams (1973)).
57

are large, it is difficult to extract information from graphs.

Two potential solutions to this problem are (1) eliminate

the graphs and use their Matrix equivalence, or (2) reduce

the graphs by combining arcs and nodes (i.e., transition and

states). Here we discuss the flow graph reduction technique.

Flow graph reduction is a technique whereby a complex

graph is reduced to a simpler, yet equivalent, form called a

residual graph. The reduction process involves the applica-

tion of a set of rules by which nodes are absorbed and branches

are combined to form new branches having different gains (i.e.,

different transition probabilities). By repeatedly applying

the reduction rules, a complex flow graph will be reduced to

one having the degree of complexity desired. If a graph is

completely reduced, the result is a residual graph in which

one input node is connected to a selected output node by a

single link. Thus, graph reduction can be viewed as a way

of solving sets of linear equations.

The technique of reducing graphs is generally applicable

only in the case of flow graphs where a limited number of

output nodes (i.e., probabilities of state occupations) are of

interest. During the reduction process, some of the nodes are

necessarily absorbed and thereby lose their identity. This is


a serious handicap in applications where the preservation of

intermediate node is of importance. Overcoming this handicap


requires that the graph be repeatedly reduced, preserving a set

of selected nodes each time.


58

In flow graph language, the value of an unknown in a

linear system can be expressed as:

n
xi = Z (T )(input). (2.28)
j=1 j-J-i

Where Xi is the ith output, n is the number of input, and

Tj- i is the transmittance from input node j to output node i.

The quantity, Tj i can be thought of as the ratio of

T = contribution of input j to output i (2.29)


input j

An ingenious technique for evaluating the transmittance T

from a specified input node to an output node in a graph based

on the formulation represented by equation (2.27) was devised

by Mason (1957). The technique is commonly referred to as

Mason's rules or "The 'Gee Whiz' Flow Graph Reduction Tech-

nique" and is expressed as:

T ( Z PkAk)/A (2.30)
all k

The summation in the above equation is over all of the paths

leading from an input node to an output node. The quantities

Pk and Ak are associated with the kth Path. The quantity A


has a value equal to the value of the determinant of the system
of equations (2.26).
59

The following steps are usually useful in finding the

transmittance T:

(i) Identify "Simple Loop". A Simple Loop

is a closed cycle in the graph which has no

proper sub-cycle (touches no node more than

once).

(ii) Evaluate "Loop Products of Simple Loops!'. The


loop,product of a simple loop is the negative

product of the transmission of arcs of the loop.

(iii) Identify "Multiple Loops". A multiple Loop is

any combination of simple loops that have no

nodes in common.

(iv) Evaluate loop products of multiple loops. The

loop product of a multiple loop is the product

of the loop products of its simple loops.

(v) Calculate A = 1+ sum of all loop products.

(vi) Identify all "paths". A path is a directed set

of connected arcs with no cycle from input to the

output node.

(vii) Calculate Pk' the product of the arc transmission

for each path k.

(viii) Calculate A k = 1 + sum of the loop products of all

loops sharing no node with path k for every path.


60

(ix) Find transmittance T from Eqno (2.30).

To clarify the above procedure, we present the following

example. Let us assume we want to find the transmittance

T of the following figure:


1+3

1 1

e
Here we refer to the steps as we proceed to find T
1+3

(i) Simple loops are, 2-2, 1-3-1, 1-2-3-1.

(ii) With loop products of -e, -ab, -cbd


respectively.

(iii) The only multiple loop consists of 2-2 and


1-3-1.

(iv) With loop product of eab, then

(v) A = l-e-ab-cdb+eab

(vi) The paths from 1 to 3 are: 1-3 and 1-2-3

(vii) With P 1 = al, P 2 = cd respectively

(viii) Then we calculate A 1 = 1-3, and A 2 = 1 to find

, a(l-e) + cd
(ix) T
1+3 1-e-ab -cdb+eab
61

Having discussed Mason's rule in flow graph reduction,

we can relate Eqn. (2.25) to a flow graph. Equation (2.25)

has a flow graph interpretation. In order to make it more

realizable, let us rewrite Eqn. (2.25b) in the following

form:

D(s)[CcW(s)]-l [I -Po H(S)] = I (2.31)

Then we can write

N 0
ik
cc [ 6 kj - Pkj hkj()] = 6ij l<i,jn
k=l wk k

and by proper rearrangement we get

pi() =N ik(s)
ccwis) j k=l k(j (s) (2.32)

for fixed i, the above equation defines N linear equations

with N unknowns. If we interpret ij(s)/CCwj(s) as the signal

at node j and assign transmission Pij hij(s) to the arc from

node i to node j, then ij (s) is the product of CCwj(s) and

effective transmission node i to node j, see Fig. 26a.

Figure 2.6b shows the partial flow graph for fixed i, as we

can see for each node j we have an output tap with transmission
Ccwj(s). Finally it is worth mentioning that ij(s)/cwj(s)
J j

and T have the same definition (.SeeSection 3.3.1).


i-+j
62

1 CCw(s)
(s) = = [I - PoH(s)]-CCW(s)

PvH(s)

a. Matrix flow graph for the continuous


time Semi-Markov process.

Pik hik(s)
ik s ( )

ccwi
Wi

Pij hij (s)

T
1J '** -

b. Partial flow graph for the continuous


time Semi-Markov process.

Fig. 2.6: Flow graph representation for the


continuous time Semi-Markovian process.
63

2.4 Reliability Model of Periodically Maintained Systems

In Section 2.1, we discussed that one way to enhance the

reliability (availability) of a system is to increase its

redundancy. Stand-by systems are another form of redundancy.

They are often added to a system of critical importance so as

to ensure the safe operation of the main plant. For example,

such stand-by systems are widely used in the reactors as

emergency safety systems. These systems are rarely called

upon to operate, but their availabilities are critical. There-

fore, each system is monitored periodically in order to

maximize its availability.

This class of systems cannot be analyzed by any of the

previously discussed models because the failure cannot be

detected when it actually occurs and because repair is not

performed in a random manner; however, repair is allowed when

the failure is detected. Vesely and Goldberg (1977), and

Caldarola (1977) have developed two different techniques for

handling this class of components. Caldarola assumes that a

component undergoing periodic testing is repaired in a constant

interval and becomes as good as new, while Vesely and Goldberg

use the same repair policy, but they do not strictly enforce

the 'as good as new' model after repair. In this study a

generalized formulation is developed that accommodates dif-

ferent repair policies.

Figure 2.7 shows a typical time dependent unreliability of

a periodically tested component with a constant repair time of


A
64

r'

.r

0
.,

0d

,0

et-
Cd

434 o
rI Pot
e U)

43

ra

c' "4
431

r4q

AI IHVI1H3Nfn
65

TR. As we can see, the test interval is divided into three

periods:

1) test period duration, Tc (hours),

tn t t n * Tc
2) repair period duration, TR (hours),

tn + T c < t < t + T c TR

3) between the test (stand-by) period duration,


T2 - Tc - TR (hours),

tn + Tc + TR t tn+l

In order to predict the unreliability of a periodically

tested component one has to account for the following contri-

butions:

A) Test period contribution


The system may be unavailable during the inspection

period for either of the following causes:

1) The system is unable to override the test with


probability q should an emergency signal require
its operation.

2) System has failed before the test with proba-


bility Q.

Therefore, the unreliability contribution during this


period will be;

R(t) = Q + (1- Q)qo; tn<t<tn+Tc


n- - +c
66

where

Q R(t)
(2.33)
t = tn

B) Repair and between the test (stand-by) period contri-


butions, tn+Tc <t<tn+l

Since we are interested in a general repair policy,

we cannot differentiate between the repair and stand-

by period contributions as we did for our typical

example. Here, we consider a general repair time

distribution function, g(t), for the repair policy.

The system is unavailable at time t after the test


if:

1) The system was up at time 0 (at the end of test


time) and it failed at t randomly according to
failure time density function f(t), or

2) The system was down at time 0 and repaired with


the following possibilities:

a) it is still under repair = 1- G(t)

b) it was repaired at time T, (0<T<t), but


failed again at t with probability =

o G(-)
t
f(t
-)dT
NOTE: in part b) we assume the system will be
repaired once per interval.
67

Therefore we can write:

R(t) = aF(t) + [1 - G(t) + G(T)f(t-T)dT] (2.34)

Where
a = the probability that the system was up after

the test (reference time: t = 0)

= 1 - a = the probability that system is down

at time zero

F(t) = f(t)dt

G(t) = g(t)dt

In the above equation is the sum of two different pro-

babilities that have happened prior to the start of this cycle.

In other words, if the above equation is written for the time

interval tn to tn+l, the value of is related to the unreli-

ability outcome of tn-l to tn. Hence, for generalizing the


Eqn. (2.34) we start at the first test interval t = t1 where

the unreliability is:

Rl(t) = F(t)

and

Q1 R 1 (t) F(tl) = F 1 (2.35)


t = tI
The system will be down after the first test with probability

B1 because either it has failed during the test or failed

prior to test. Therefore:


68

= Pf + (1 - Pf)Q 1 = Pf + PfQ 1 (2.35a)


1

where:

Pf = probability that the test caused the failure.

Consequently, by writing the successive Qi and .i we can write

= (1 -
Q 1 )F + 1 J2 c G(T)f(t - T)dT
=
Q2 -(1
B!)F + B

2 - T)]
+ 61[- G(T (2.35b)

However, the third term of the above equation is almost

zero because T 2, in practical cases, is much larger than the

average repair time, and if we replace the integral term by

Y we have:

Q2 = (1 - 1)F + Y

Substituting for 81 from Eqn. (2.35a), and rearranging

the above equation, we get:

Q2 = X - Q 1 [X - PfY] + PfY (2.35c)

82 = Pf + PfQ 2

Q3 = - xIX - fY] + Q[X - fy] - PfY[X - PfY] + PfY


* 9 9

S.

* s0
-
1 C-1 N-1
I
pyN-l
- P fY]
QN (X + PfY)
1 + X -Y

1 )N
+ (.- Q [X - pfy]N-1 (2.36)
69

(2.36a)
BN = Pf + PfQN

Where:

F = F(t)I
=
t T2 Tc

X = fF

N = number of cycle

The general unreliability equation for time interval tn


to t is:

n+
%(t) (1 -i s,.1 )F(t) + 1 - G(t) + G(T)f(ft
- )dT]

0 < t < T - Tc (2.37)


_ - 2

Usually the quantity, [X - PfY] is very small and the

value of QN quickly reaches its asymptotic value which is

given by:

X + PfY
Q. (2.38)
1 + X - fY

R (t) F(t) + [

I[1 - G(t) - F(t) +


1f +-
The asymptotic unreliability equation will then read:

X + PY
+fY
fY
f

f
+ Pf]

G(T)f(t - T)dT]

0 < t < T2 - T c (2.39)


70

To show the applicability of the above equation, we give

the following examples and compare the results with Vesely

et al. (1977) and Caldarola (1977) for constant repair and

exponential failure distributions.

First, let us show the above referenced equations (for

detailed information see the references):

a) Vesely et al. (1977). For the case where

Pf = 0 and q = 1

1) R(t) = (t - Tc) TR + Tc < t < T 2 (2.40)

2) R(t) - x(T2 -Tc) + (1 - 1 XTR


(T2 - TC))

Tc < t < TR + Tc (2.41)

3) R(t) 1 0 < t < Tc

b) Caldarola (1977).

1) R(t) = l(t) - l(t - Tc) + F(t - T c)

F(T 2 - T)
1 + F(T 2 - Tc) - F(T 2 - Tc - TR)

[l(t - Tc) - l(t - Tc - TR) - F(t - T )

+ F(t - Tc - TR)]

< t < T2 (2.42)

2) Approximated equation used in PL-MODT(Modarres


(1979)). (2.43)

R (t)t) - 1- [1 exp(-
exp[- () ]]
t approx eff P c
71

+R2
where

2T T r(
- )
T2 T T 2 - Tc
+ (1 - )
2 2

q = n( - n(XTc))

.r(·) = Gamma function

The significance of q is: it must be large enough to


causek(t)lapprox to become:.
Iapp
rox

= 1 - exp(- efft) for t > Tc


. (t)approx

Now, let us find our equation for constant repair and

exponential failure distribution.

t < TR
G(t) = |
t > TR

f(t) = Ae t, A is failure rate.

For consistency with the Eqn. (2.42), we change the

reference time in Eqn. (2.39) to the start of the cycle:


72

R (t - T) = (1 - )F(t - T) + [1 - G(t - Tc)

+ ft ' Tc G(T)f(t
-T - T)dT] (2.44)
0

By replacing G(.), ,, and taking into account that

Pf = 0 and

Y = G(T)f(t - Tc - )dT
f0t-Tc

_fToT f(t - ')dt' = F(T 2 - Tc - TR)

where T' = Tc + T

Equation (2.44) will read:

F(T - T )
R.(t - Tc) = F(t - Tc) +
1 + F(T 2 - Tc) - F(T 2 - Tc - TR)

[1 - G(t - Tc) - F(t - Tc) + F(t - Tc - TR)]

(2.45)
In the cases where [F(T2 - Tc) - F(T 2 - Tc - TR)] is small
we can find an approximate formulation from the above

equation for t in the intervals of [Tc, T + TR] and

fTc + TR' T2], where TR is average repair time.


73

For T c < t< T + TR we can write:

= F(t - Tc)+ F(T2 - Tc)[1 - F(t - T)]


-t)lapprox
(2.46)

Because G(t - TR), F(t - Tc - TR) are both zero when their

arguments are less than or equal to zero, then we can find the

average unreliability over TR.

I avg
1=

R~
-
= T-[ ( - F(T22 - Tc))
( T))
f (t)dt
F(t)dt

JTR
fO F(T2 - TC)dt]

Replacing for F(.) and approximating the exponential term


x2
(i.e., e = 1 - x + A- where x < 0.01) we get

filavg = [1
- F(T2 - Tc)]½ ATR + F(T2 - Tc) (2.47)

For Tc + TR < t < T2 we can write:

R (t) ap F(t - Tc) + F(T2 - Tc)[F(t - Tc - TR)

- F(t - T)] (2.48)

But since we assumed that the second term is small

compared to the first term, because the bracket term is very


74

small, Eqn. (2.41) will be reduced to:

R(t) = F(t - T) TR + Tc < t < T 2 (2.49)


approx

The application of Eqns. (2.39 to 2.43), (2.45), (2.47)


3 /hr, TR = 21 hrs.,
and (2.49) to a typical system with A = 10

T 2 = 720 hrs., T c - 1.5 hrs. is shown in Table 2.1 and

Fig. 2.8. As we can see, Caldarola's first equation and

Eqn. (2.45) of this study give exactly the same results (they

are basically identical equations). Our approximated equations,

i.e., Eqns. (2.47), (2.49) predict very well in comparison. to the

exact results (compare columns 4 and 5). Bath the equations

used in PL-MODT (cf. Eqn. (2.43)) underestimate the unreli-

ability during the repair and overestimate at the end of the

interval. Vesely's equations overestimate conservatively

everywhere because of the nature of the approximation used in

the formulation. In fact, if one keeps the exponential terms

instead of its first two terms approximation in Vesely's model,

one will get the same equation as we obtained by approximating

our general equation (cf. Eqns. (2.47), (2.49)). We should

mention that the approximation is only true if [F(T2 - Tc) -

F(T 2 - Tc - TR)] 1.
In another example, we consider the following repair
distributions:

1) exponential with = -, TR - 21 hrs.


75

1.C
.g
I

10- 3
8
14 7

H 6
4
<4 3

10-1
9
.8
7
6

5 10 50 100 500
Time (hour)

Fig. 2.8: Comparison of the unavailabilities as calculated from the


equations developed in this study and those of Vesely's
and Caldarola's for a periodically tested component.
76

Ul L' '0 \'0


, O
;
,Iltot H t
o t o oLn
H0 00 o 0ooo on
o
Lo
0o
o
I) . (( (_ (N
o C)
o0 0 0 -q i n0n
**'I
-I)
.............
n C........
*-O O
0r
C, C ..
) ') ...

. O Oq O O
O O O O O

NO*d~,m H
*N C
r-O--1n,-
t) ,-
O
'0
C
O
-
'0 #9
0
N-
'0 o)
ot oc00
N
O
0
00 00 'N- 0 t0-·
0- Mt
oot
d0 (-I ur0 o

O4 C+
A : O O +- O O O 0 O 0 O CO O
H C C
00

: Ln N 0 t8) f
0(NM I
H .4
0a) oo t 0o i) N ) 0t)
'O N O O 0 O
cNI( H1 N 0) O O N dt N c)
vcR
U') ¢o rl *
* C) \s * b eH C)* m * oql 10o no so
C) C-o
>-
X 0 00 N- t 0O 0
O
00 U Z:
(1)

UJ
C) LI)

O
L)

O
C)

O
LI) 0

O 0
0 vCI

O
#))

O
C
O
LC)

O
N

O
OH

'N N O # O N C
t 0) oo t
00 LC N N- c. O "
n _ .g ) \0 N O 0 00 : C C) C 0
N) r-, N c N L) o o N dt N L)
*X 0 -1 v- v- 00 N- N dw O 0)
;W L) 00 00 O O v ) dt 00 N
* 0. o 0* o o0 * * 0

r.

4 H
O O O O ONO O O O O C O
r 00 rI v- ) #) 0 0 M 00 - N.
n~
O.H

eQ, k v, 0 o 0 oo0 0 0 0 (
r P. 0 rj
N
00 N O O 0o N
o c
o
v-I N N '0 3
77

2) 2nd order Erlangian _ 22 , T


TRR 21
21 hrs,
hrs.,
R
3) lognormal with p = Ln(TR), 0.5, T R = 21 hrs.

Table 2.2 summarizes the unreliability results by

utilizing Eqn. (2.45). None of the above references are

able to predict the unreliabilities for these distributions.


Therefore, a comparison was not possible. However, a comparison

of the aforementioned distributions with their given properties

shows a small difference in unreliability outcome for this

typical example, which signifies that we can interchange dis-

tributions as long as we maintain their mean values nearly

identical.

In the above example, both exponential and 2nd order

Erlangian distribution have the same mean, i.e., TR = 21,

and the lognormal has a median of 21 hours. In view of the

complexity of Eqn. (2.45) for the above distributions, the

exponential distribution is a good candidate for repair, ie.,

it is simple.and gives conservative values (see Fig. 2.9). For

detailed equations used in the analysis of Table 2.2, see

Appendix A.1.
78

__ _-
-
I -- , ....

r-4
U
fA * Cd
I-
c -u-I
I-.
Em'
rbPI
Cd$-

E-

a
I-= 00
Ln N 00 r N Ln N. LI) rn LI Ln oo
0) N t- 0 0) , ,t L 0) N
o cn 3 N. LI' N cn N 3 N. t o
u- N. N N N. - i- N N. N 0 1-
: c it O'N 1I' U') 0O
O I N , O
bO
o ii I:
C) O N N o o 0o · to4 u)
L N
O

L-
I
o

trl U)
C tC oC> t
) c0 c C; c
d 00
o ,,n) , o,0
,-N1 cn*o 'o . o o0)
0 N O - -I d " d ""e. CD
0d

O r4 tN \'
(N ott
Lcn
o 0 Li
u · d 00 r t· N
U
rl-
Cd O
LI)
0'
I"
p
N
r-
N
'3 '3
O
00 N
CO
O
N. N
4)
t
R,
o
L
cn
p4 *.0 0 0 0. 0 0 9- 0 0 0
P4 o C 0 0 0 0 0 C0 C) C) C 0)

Pc
I-ur
[.-
pE-
I.--
4
0
z
HE- Cd
· 9') '9') 0 '9') ') 4o oC0 · r.
o
vf
N ,4400Nv 4L)
o 4NON N00o
to''
O oo0 '3 o'3Hq VHN , n
G) I!
t u- N. '' N
00 O
0
0) N-
EH
o ;
O N C CO N. 00 N N. N t O L0')
LI) 1 N N O O O e d LI') Nw

* . .9 6 0 0_ 0 0. 0 D.

.r 4
x
0
-o
Cd
[- o L) oC C) L) O O O O O O C) OC
O - r- OC Ln O O O O O o
v- N N Ln' N C O oC o N
v- v-f N rt '3 N.
--
79

_ r _ I I __
I I I I I

1.0
9
8
7
6

m
3

-J
1-4
2 m
- Exponential

. --- 2nd order Erlangian

_4 --- Log-normal
J
LU
z
::: 10 - 1
9
I
8
7

m
4

3 I I I I I
0 150 300 450 600 750
TIME (HOUR)

Fig. 2.9: Comparison of the application of


Eqn. (2.45) to the different
repair distributions.
_(1

r.

ki·

"-

".
80

CHAPTER 3: PRINCIPLE STRUCTURE OF THE CODE UNRAC

3.1 Introduction

The knowledge of the availability of the emergency

safety features in a reactor is of major importance in

assessing reactor safety. One method which has been used

successfully in the past and is being reiterated in the wake

of Three Mile Island is Fault Tree Analysis (FTA). Most


vendors and utilities have been instructed by the Nuclear

Regulatory Commission (NRC) to evaluate the availability of

their emergency systems to ensure a safe condition of the

reactor should an accident occur. Because of the complexity

of the system, one has to have a tool (generally a computer

code), to predict the availability of the system. To do so,

there exist several codes (cf. Chapter 1), but each has its

own restrictions and usually one requires two or more codes

to do a complete job. Therefore, one has to be familiar with

three or more codes in order to adequately predict the

required system availability. In such cases, the probability

of making errors in data preparation will increase greatly

and the job will become cumbersome. Hence, there is an


incentive to develop a code which is comparatively easy to

use and, in addition, can accommodate the complete job in one

step. In this study, we develop a computer package which does

the following:
81

1) generates the minimal cut sets;

2) finds the importance of each basic component rela-


tive to system failure;

3) evaluates the point estimate, time dependent and


average unavailability and/or unreliability of the
system under study; and

4) simulates the top event distribution by Monte-Carlo


and finds the approximate distribution which
represents the top event by the method of moment
matching.

The package can be used for carrying out any of the above

analyses individually or totally. To discuss the basic prin-

ciple of the code, we divide the package into three sections:

a) Cut set generator

This part of the code generates the cut sets and

finds the minimal cut sets which are essential for

evaluating the top event unavailability.

b) Time dependent unavailability evaluator

This part of the code calculates basic component

unavailability depending upon the nature of the

components (maintained or non-maintained) and


parameters (failure and repair rates) involved.

c) Monte-Carlo simulator

This part of the code simulates each component's


distribution in order to find the top events
82

distribution and to determine the uncertainty in the

evaluation of top event from the given spread of the

component's parameters

3.2 Cut Set Generator

3.2.1 Introduction

The first step in FTA is to find the minimal cut sets.

In the past decade, several codes have been developed by

different researchers. The most popular among these are PREP,

by Vesely and Narum (1970), MOCUS by Fussel et al. (1974),

TREEL-MICSUP by Pande et al. (1975), SETS by Worrel (1975),

and WAM-CUT by Erdmann et al. (1978). The above selection is

a good example of the versatility of the techniques which have

been used to generate the minimal cut sets. PREP is the first

code in this series. Because of the inherent limitations of

the PREP code, MOCUS was written to replace it. MOCUS uses

a top down algorithm. The algorithm starts from the Top event

of the fault tree and moves, by successive substitution of the

gate equations, down the tree until only basic events remain

in the list of possible Top-event failure modes(i.e. cut sets).

In another study it has been found that by proper storage and

restructure of the fault tree, one can decrease both the stor-

age and time necessary to find the cut set relative to MOCUS

(Chatterjee (1974)). is basedon the above


TREEL-MICSUP
findingand the algorithm used in this code is a bottom up one.
In this code, the tree will be restructured first and then the code
starts to find the cut set and respective minimal cut sets.
83

In SETS, Set Equation Transformation System, the code symbol-

ically manipulates Boolean equations formed by a set of events

operated on by a particular set of union, intersection and

complement operators. Here the fault tree should be input as

a Boolean equation to represent each event as a function of

its input events.

A Boolean equation is a mathematical representation of a

gate, i.e., for two components inputted to an AND or an OR

gate one has to write:

C = A + B for OR gate

C = A * B for AND gate

C = is the output signal

The last code mentioned above, WAM-CUT, uses the TREEL-

MICSUP algorithm. First the code restructures the fault tree

so that each gate can only accept two inputs. The restruc-

tured tree will be stored from low level gates -- gates with

components input only -- to high level gates and finally top

event. Second, the code uses binary representation of basic

components, i.e., 1 for failed state and 0 for unfailed state

of the component, to be able to store each component in the

cut set by a bit of the word which represents the cut set.

By this method, one can store as many as 63 components in a


cut set word length in CDC 7600 or 31 components in IBM 370/168

as compared to 63 or 31 word lengths needed to store in MOCUS

or MISCUP.
84

All the above mentioned codes are written for large trees

and the initial storage required for each code is prohibitive,

i.e., WAM-CUT needs 1440K-byte storage for any job of up to

1500 components and/or gates. Rosenthal (1975) pointed out

that in general we cannot hope to find a "fast" algorithm for

arbitrary fault trees, and the only hope for analyzing large

trees is through the application of tree decomposition methods.

Therefore, there is an incentive to look for an algorithm

which requires less storage and which is comparatively fast to

analyze the large decomposed trees. In addition, it should be

able to couple the code with others without hampering the code

performance and exceeding certain storage limits. For quali-

tative and quantitative analysis, one needs the top event

minimal cut sets only, therefore one should be careful not to

sacrifice storage and efficiency in order to get more capabil-

ities out of a given code.

3.2.2 Cut Set Generator Used in UNRAC

To generate the minimal cut set from a given fault tree,

we chose BIT and modified it to suit our needs in this study.

BIT is a portion of an unpublished work by Dr. Wolf (1975) in

which one could only use a fault tree with a limited number of

components and/or gates. After a series of modifications and

improvements, the BIT code was benchmarked against MODCUT

(Modarres 1979) and WAM-CUT (EPRI-1978) to check its accuracy

and efficiency. The reason for choosing these two codes was

that both use binary representation of the components to


85

generate the minimal cut sets. However, MODCUT is written

in the PL/1 computer language whereas WAM-CUT and BIT are in

FORTRAN IV.

The BIT code uses a top down algorithm. It generates the cut

sets by successive replacement of the gates through the fault

tree logic by their respective inputs. BIT is written in such

a way that the user could easily follow the steps in the code

and, to prevent confusion, the major steps in the code have

been separated from the main routine by proper subroutines.

Figure 3.1 shows the flow chart and steps used in the BIT.

Since cut set generation is a time consuming process, a

discrimination procedure which eliminates cut set of given

size has been encoded to accelerate the computation when

quantitative evaluation is a desired goal and the component

unavailabilities are small. The discrimination process is at

its best when one uses a top down algorithm. Otherwise, one

has to find all the intermediate cut sets before one is able

to discriminate them. In order to see how the discriminatory

process is important for some fault tree see Fig. 3.2 (EPRI

(1975)).
BIT is also capable of handling both complement and basic

events. Therefore, it allows the user to have NAND, NOR, NOT

and Exclusive OR (EOR) gates in his fault tree. However, by

proper transformation of special gates, such as NAND, NOR and

EOR, one can easily end up with the basic logics, i.e., AND,

OR and NOT gates. Fig. 3.3 shows such transformation.


86

atrr

/ Padi i n.
* no. of comp. & gates
* output type & cut size
* fault tree logic

i II I I I I

Initialize:
* components gates
· put II and I=1
* store the Top Gate in
cut set II

,>.
i u~~~

Is
the gate No
L
an OR gate?

Store all of
Yes
m i I ! Ii gate's input
Locate a in
Store the first gate's cut set II
gate
input in cut set II
in cut -- . .

set II

Store each of the other


gate's input in a
separate cut set I
I
_

Yes

CALL MCSN1

CALL MCSN2 a supe

--
-~ lcae es

Fig. 3.1: Flow chart and steps used in cut set


generator BIT.
87

No

Yes

sort cut sets


according to
the cut set size

Fig. 3.1: continued.


88

,6
IU

10'

4 S

m, 10 3
o
.rq 5
0 010
U
m: 5

200 400 600 800 1000 1200 1400 1600 1800 2000

Number of Fault Tree Components

Fig. 3.2: Computer time to deterministically


find minimal cut sets, (EPRI (1975))
89

I I

A B EOR

B A
A B

A B NAND
A B

NOR
A B

A B

· Y~LI A

A NOT

Fig. 3.3: Equivalent transformation of EOR,NAND,


NOR, and NOT gates
90

To clarify the BIT algorithm described above, we do the


following example. Figure 3.4 shows an electrical circuit which

was originally used by Fussell (1973) and has been used by

several authors by including and/or excluding different parts

from it to show their methods of generating cut sets (prime

implicant), i.e., see Bennett (1975) and Henley Kumamoto

1978). Figure 3.5 shows the transformed fault tree after

replacing EOR by basic gates and combining some of the suc-

cessive gates given in Bennett (1975). By proper initializa-

tion of components and gate, we have a total of 9 components

and 11 gates.

BIT starts by initializing each component and/or gate

uniquely by storing 1 in a bit of the word related to the

component and/or gate number, i.e., component I will be stored

as:

ME (I) = 2**(I-1) I< 31

for I 1

ME(1) - (0000 0000 0000 0000 0000 0000 0000 0001)2

or to show it more compactly

ME(1) = 0 0 0 0 0 0 0 oool = B'l'

and for I = 20

ME(20) = 0 0 0 looo 0 0 0 0 = B'20'

B'.' = represents bit occupation No. of the word (The


rest of bits are zero)
91

POWER
SUPPLY 1 LIGHT
BULB

POWER
SUPPLY 2

SWITCH

TOP EVENT Z: No light


PRIMARY FAILURES A: Bulb failure
B: Power supply 1 failure
C: Relay contactsstuck open
D : Circuit breakerstuck open
E: Switch stuck open
F :Switch stuck closed
G: Power supply 2 failure
H : Relay toil open circuits
I: Circuit breaker coil open circuits
INITIAL CONDITIONS: Switch closed,relay contactsclosed,circuit
breaker contactsopen.
CIRCUIT ACTION: If relay contactsopen, an operatoropensthe
switch causingthe contact breakerto close
thereby restoring power to the bulb.
NOT-ALLOWED EVENTS: Operator failure, wiring failure, secondary failure.

Fig. 3.4: Detailed information of operation and


failure modes of an electrical wiring
system (Bennett (1975)).
92

-8 -9 5 7 8 9

Fig. 3.5: Compact fault tree of the electrical


wiring system.
93

In the top down algorithm we start by reading in the

fault tree from the top event and for each input to the OR

gate we increase the number of cut sets. Each cut set in the

code consists of two parts: the positive part which stores

all the basic events and gates and the negative parts in which

we only store the complements of the events called KM and MK

array in the code respectively. Since there are only 31

available bits in a word (IBM 370/168), we have to combine

a series of word lengths to get the desired components and/or

gates capability in the code. Here we use only 1 word length

because the total number of components and gates is twenty.

The top gate is an OR gate, gate 10. Therefore, we have

two cut sets:

KM(l) = B'll' MK(1) = 0

KM(2) = B'1' MK(2) = 0

Cut set No. 2 is a component No. 1, but cut set No. 1 is

gate 11. Therefore, we replace gate 11 in cut set 1 by gate

12 and 19.

KM(1) = B'12,19' MK(1) = 0

Gate 12 is an OR gate therefore, we increase the number

of cut sets

KM(1) = B'2,19' MK(l) = 0

KM(3) = B'3,19' MK(3) = 0

KM(4) = B'13,19' MK(4) = 0


94

In cut set 1 we have gate 19 which is an OR gate,

replacing it by its input we have:

KM(1) = B'2,20' MK(1) = 0

KM(5) = B'2' MK(5) = 0

KM(6) = B'2,4' MK(6) = 0

We continue the replacement procedure until there is no

gate in the cut set.

KM(1) = B'2,6,14' MK(1) = 0

KM(1) = B'2,6,15,16' MK(1) = 0

KM(1) = B'2,6,16' MK(1) = 0

DI(7) = B'2,6,16,18' MK(6) = 0

KI (1) = B'2,6' MK(1) = B'6'

KD(8) = B'2,6,17' MK(8) = 0

At this stage, since the positive part of cut set number

1 is composed of basic events only, the code first combines

the two parts of the cut set to nullify the cut sets which

contain both complement and basic event of a component (i.e.,

A. = 0), see Appendix B.1. In this example, combination of

KM and MK results in:

(KM,MK) = B'2,6, 6' = 0

The above process is executed in Subroutine MCSN1. Sub-

routine MCSN1 is also used to compare the new cut sets with

the previously generated cut sets to check if it is an identi-

cal or super set. If it is neither, then the code calls


95

Subroutine MCSN2to check for the existence of any super sets


corresponding to new cut set. If so, it should be nullified.
The basic logic used in these two subroutines is the combina-
tion of ANDand OR logic, i.e., if CS1 and CS2 are the two
cut sets to be compared and CS! is the new one, then we have

the following cases:

a) IF (IAND(CSl,CS2).EQ.CS2) (Subroutine MCSN1)

CS1 is either identical or a super set to CS2

b) IF(IAND(CSl,CS2).EQ.CS1) (Subroutine MCSN2)

CS2 is a super set of CS1

In case a), CS1 will be nullified whereas in case b),

CS2 will be nullified.

After the comparison and elimination process, the code

will start the substitution process by replacing the last cut

set number for the one just eliminated. Here, cut set No. 8

will replace cut set No. 1 and the code will continue by

substituting for gate 17.

KM(1) = B'2,6,5' MK(1) = 0

The new cut set No. 8 is:

KM(8) = B'2,6,71 MK(8) = 0

KM(9) = B'2,6,8' MK(9) = 0

KM(10) = B'2,6,9' MK(10) = 0

Again cut set No. 1 is a gate free cut set. Therefore,

subroutine MCSN1 will be called. Since there exists cut set


96

No. 5 which has component No. 2, cut set No. 1, which is a

super set of No. 5, will be replaced by cut set 10 and the

process will continue. In fact, all the previously found

cut sets which have component No. 2 in them will be cancelled.

This elimination and substitution will continue until the

code has swept completely through the fault tree and found all

the minimal cut sets. The final results for this example are

shown in Table 3.1.

The example in Table 3.1 was checked against the results

of Bennett (1975) and Henley Kumamoto (1978). It took BIT

a fraction of second to generate all the cut sets (i.e., 0.07

sec.).
In another example, the code was checked against MODCUT.

The fault tree used is shown in Appendix B.2. It generates

the 86 cut sets in 0.84 seconds compared to MODCUT which

takes 9.8 seconds.

In the third example, the code was checked against

WAM-CUT. Here we do not have any processing time for WAM-CUT.

The BIT code generates all the minimal cut sets in 0.35

seconds. The fault tree and the respective minimal cut sets

are shown in Appendix B,.2.

As we discussed earlier, cut set generation is a lengthy

process and in order to shorten it one has to be familiar with

the tree and be able to reduce it as much as possible. There

are many methods that can be used to reduce the fault tree:
97

Table 3.1: MINIMAL CUT SET GENERATED BY BIT


FOR THE EXAMPLE FAULT TREE

Components in the Cut Set


-
Cut Set No. Name No.

1 A 1

2 B 2

3 CD 3,4

4 DEF

5 DFG

6 DFH 4,-6,8

7 DFI 4,6,9
8 CEF-HI 3, , 6 7,,'
98

1) Combine the successive gates as much as possible.


However, this can only be done if the successive
gates have the same type of gate.
2) Break downthelarge tree to small non-repetitive
sub trees.
3) Combine some of the components inputted into a gate
or different gates and represent them as a special
component.Thismethod is very useful in quantita-
tive analysis. To determine the properties of a
special component in termsof individual component
properties, see Appendix B.3.

3.3 Unavailabilty Evaluator

3.3.1 Introduction

Quantitative evaluation of system reliability is con-

cerned with synthesizing reliability characteristics for the

TOP event from basic component reliability characteristics.

Most published methods for quantitative evaluation require

minimal cut sets as input to represent the system model.

Component failures represented in a logic model, i.e.,

fault tree or event tree, can either be statistically dependent

or statistically independent. Conventional reliability analy-

sis methods assume that component failures are statistically

independent. However, dependencies can be easily modeled in

the logic diagram so that the events in the final minimal cut

sets are statistically independent, see Gokcek et al. (1979),

Elerath Ingram (1979), EPRI 217-2-2 (1975). One of the

methods which can be used to model the dependencies is the


99

Markovian model. By this method, one can easily separate

the dependent logic and show the combined system as a non-

dependent event in the fault tree. But, one should bear in

mind that the above model sometimes requires the use of a

computer program for solving linear differential equations.

In certain cases, the method discussed in Chapter 2 might

be helpful in finding the system reliability function.

Another method for handling dependent events requires

redrawing the fault tree. Suppose one wants to model the

dependent event B in which the probability of event B is

dependent on the occurrence of event A. Then one way to

show the dependency is to use the building block shown in

Fig. 3.6.

As it is shown in Appendix B.1, the event space in Fig.

3.6 represents the logical expression:

B (AnB/A) U (AflB/A)

Where:
B/A = Event B given the occurrence of A

And
B/A = Event B given the occurrence of A

The above identity holds since, if we assume

C = (AnB/A) (ACB/A) Then


C = (AnB)U'(An B) from the definition of events
B/A B/A
100

.:

Fig. 3.6: Fault tree showing a dependence


of B on the occurrence of A.
101

Thus:

C = Bn(AUA) = B

When applying this model to large fault trees, it must

be recognized that the dependent event building block is

irreducible.

Figure 3.6 could be extended to model common mode failure.

For example, the interaction of a number of components to a

common mode initiator (such as flooding or fire) can be

described by incorporating a dependent event building block

for each component. Figure 3.7 shows a model for describing

the top event B of a system in which the common mode initiator

event A is specified.

Knowing this, we can model our logic diagram i.e., fault

tree, such that each basic event represents an independent

event and we can develop certain mathematical models that will

allow us to predict individual basic event unreliability and/

or unavailability. Finally, by using the minimal cut set we

can calculate the top event unreliability and/or unavail-

ability.

The first comprehensive unreliability analysis of


emergency safety features was performed in WASH 1400 (1975).

In that study most of event occurrences were assumed to be


constant. But, since most of emergency safety features are

redundant systems and they are periodically tested to insure

their readiness, one has to make a time dependent unreliability

analysis to be able to predict the system unavailability more


102

Fig. 3.7: Dependent event conncecting fault tree.


103

realistically. In today's quantitative codes only KITT 1 2


(Vesely et al. (1970)), PL-MODT (Modarres (1979)) and FRANTIC

(Vesely and Goldberg (1977)) are time dependent evaluators.

KITT 1 uses exponential failure and repair distributions with


constant parameters but is unable to analyze the periodically

tested components. KITT 2 is the same as KITT 1 except that


it has the capability of accommodatingstep wise time depen-
dent repair and failure rates. FRANTIC is an approximated

KITT code with the capability of handling periodically tested


components. The above codes are written in FORTRAN IV. PL-

MODT is a PL/1 code which uses modular techniques and has the

same capability as the FRANTIC code. All the above codes

have prespecified failure and repair distributions, and most

of the time the distributions for the system being analyzed

are not the same as those which have been coded. Also, some
systems require more rigorous evaluation because of the exis-

tence of different types of failures and paths that the system

may undergo during its change of state. For example, consider

a pump. A pump can be in one of the following states at any

given time:

1) up state (working or stand-by)


2) under the test

3) under repair

Because of certain mechanical difficulties or economical


reasons not all the possible failure models can be immediately

detected upon the failure, (i.e., it is impossible to allocate


104

a detector for crank shaft failure or it is uneconomical to

install detectors for all the ancillaries). Therefore, there


is a chance that the pump must go under the test in order to

find its malfunction, or there is a chance with probability

P1 that the pump can be started by resetting its control


panel. Figure 3.8 shows all the possible paths that the pump
may undergo during its change of state.

3.3.2 Unavailability Evaluator Used in UNRAC

For our unavailability evaluator we used the models


discussed in Chapter 2 for the maintained and non-maintained

systems (components). For the maintained system, we consider

a variety of distributions as a candidate for repair distribu-

tion and they are:

1) exponential

2) constant

3) Erlangian (special Gamma)

4) log normal (approximated by combination of


exponentials)

The related reliability equation for a two-state system with

exponential or constant distribution has been discussed

earlier (cf. Eqns. 2.12 and 2.18).

In the case of periodically maintained components, the

general equation developed in Chapter 2 (cf. Eqn. 2.38) has

been used with the following repair distribution capabilities:

1) constant

2) exponential
105

P4

UP REPAIR

TEST

Fig. 3.8: All possible paths of a pump failure.


106

3) Erlangian

For the detailed equations see Appendix A.1.


To be able to analyze a 3 state component, see Fig. 3.8,

we model the component as a semi-Markovian process. Figure

3.9 shows a detailed 3 state system with all the respective

holding time distributions. Using the method discussed in

Sections 2.3.1 and 2.3.2, we can show that the probability

that the system will remain in state 1 at time t given that

it was at state 1 at time zero have the following Laplace

transforms:

1- F(s) (3.1)
f11(s)- s[l-P4 F*(s) G*(s)-P 3 F*(s) PP1 3F (s) P 2 T*(s) G*(s)

The above equation can be easily transformed to equation

(2.16) by putting P 3 =0 and P 4 =1. Therefore, we used this

equation as a general equation for the monitored system

(component). The application of equation (3.1) to the afore-

mentioned repair distributions are:

1) If we assume repair and test distribution to be

exponential then we have

G*(2) =

T*(s) V
V+s (3.2)

F*(s) = X

where , p, v, are failure, repair, test completion

rates and they are assumed to be constant. Hence, by


107

P4,F(t)

UP REPAIR

P3 ,F ,(t)

TEST

Fig. 3.9: A general three-state component.


108

substituting for G*(s), F*(s) and T*(s) in Eqn. (3.1)

we get:

(P + s) (v + s) (3.3)
11 (s) =
s[s 2 + as+ b]

where

a = X + v+ p- P1P3X
b = v+ v + P3X - P1P3(v + p)

Inversion of Eqn. (3.3) to the"t domain results

in a series of equations which are given in Appendix

A.2. The asymptotic value of 1 1 (t) can be easily

shown by applying final value theorem:

¢11(X = lim 11(t) = lim [SO11(s)] (3.4)

Hence

+l1 ( X ) - -- v5 (3.5)

2) If we assume that the repair distribution is a 2nd

order Erlangian then 11(s) will be:

( + S) 2 (v+ s)
011 () = (3.6)
s[s3 + as2 + bs+ c]

where

a = + v+ 2- P1Ps
109

b = 2(A + v) + P2 + vA - P 1P 3 (2 + v)X

C = (v + A) 2 + 2v - P 3PlX(2v + ) - P 4Ai 2

Again we only show the asymptotic value here and the

rest of the respective equations are given in

Appendix A.2.

- 1 1c ( ) (3.7)

3) If we assume the repair distribution to be lognormal,

we can approximately match it with a combination of

exponential distributions, i.e. we can assume the

repair distribution, g(t), to have the following


form:

g(t) = A[eXt e-X2 t] (3.8)

where

2 X1
A by normalization is equal to X A

1X 2
and G*(s) = (A+ s) + ) (3.9)

using the above equation for G*(s), the asymptotic


value for pll(t) will be:

v (3.10)
11 )°°- c (3.10)
110

where

c = (v ) 2 + ) - PP(X + + A1A2)

- P4 A1A2 A

and the rest f the related equations are given in

Appendix A.2.

Since the cumulative repair probability distributions signi-

fies the completeness of the repair, we compare the lognormal

and its approximated (matched) combination of exponentials by

theircumulative distribution values. Figures 3.10-to 3.12

show the above comparison for two typical examples. As can

be seen, for a highly skewed lognormal (large a), the dif-

ference between the results is on the order of maximum 2%

(Figs. 3.10, 3.12). But for a small standard deviation

(small a), the lognormal repair distribution approaches 1

faster than its approximated distribution. Therefore, the

combination of exponentials produces reasonably accurate

(for largea.) and/or conservative (for small a) results. For

detailed values of Fig. 3.10 to 3.12 and for the method for

approximating lognormal, see Appendix A.2.

Having prepared the reliability equations for all the

possible types of components involved in a system, we need a

general system function to be able to evaluate the systems

reliability. As we mentioned earlier, most of the quantitative


codes use minimal cut sets as input to represent the system

model and the general system function given the minimal cut
lll

C
0
-
o

=1
1.
-

4J
W

.0
0
0.
1.

S
o
n

to
4-.
I
4j

Time (hr)

Fig. 3.10: Comparison of cumulative probability distribution


of a log-normal with its approximated combination
of exponentials (median of 21 (hrs) and error
factor of 2.56)
112

nA an ____

99.9

99.5
99

95

90

L80

40

30

260

10

5
10

0.5

0.1

Time (hour)

Fig. 3.11: Comparison of cumulative probability distribution


of a log-normal with its approximated combination
of exponentials (median of 21 (hrs) and error
factor of 1.4)
113

99.9!

99.S

99..
99

95

, 90
-9

'
4.
80

a 70
60

i 20
10

L.0
0.4

0.1
0.1

Time (100 hrs)


Fig. 3.12: Comparison of cumulative probability distribution
of a log-normal with its approximated combination
of exponentials (median of 100 (hrs) and error
factor of 3)
114

sets has the following form:

N
QTop H (3.11)

where:

qj is the component unavailability or unreliability

j is the component number in s cut set

N is the total number of cut sets

J . is the union and by definition:

m m
kY. = 1 - r(l-Yi); Yi is any function

Expanding Eqn. (3.11) in terms of cut set unavailability

(Qcz
i'Qci = 7 qj) we get:

N N N
QTop ilc.
1i
l 11QciQc.j + i 1 Q QC &l13R
i<j i< j<t

N
+ (-1) ]iQci (3.12)

Evaluating the QTop for a medium size fault tree requires

a lot of computation time. Therefore, we approximate


Eqn.(3.12) by the method of inclusion-exclusion principle

discussed by Feller (1957) and Vesely (1970).


a) first term approximation, sometimes called rare-

event approximation
115

N
QTop -. QCi S1 (3.13)

b) first three terms approximation

QTop S -2 + (3.14)

c) first 5 terms approximation

QTop - 52 + 3 - 4 + 5 (3.15)

where
Si is the ith term in Eqn.(3.12).

The degree of approximation required to find the QTop is


usually dependent on the component unreliability and overall

size of cut sets involved. For low minimal cut set size, we

sometimes have to use a higher degree of approximation to get

a more accurate result. Occasionally, the following equation

is used to find the range of the top event unavailability.

1 - 2 < TQTop< (3.16)

To prepare our unavailability evaluator, we restructured

FRANTIC (Vesley and Goldberg (1977)) to incorporate our

models. We call the package UNReliability Analysis Code or


UNRAC. The code is organized in such a way that any user with
a limited knowledge of the computer can follow the code should
an alteration or modification be deemed necessary. Figure 3.13

shows the logical flow chart used in UNRAC. For more infor-

mation about the structure of the code see Appendix C.


116

MCSN1
VTA T rC)
M--
LDECOD

- COMDAT

MONTOR- SOLN
- QCAUN
EXAGAM

- TIMES
SOLNT
UNRAC - CIMPOR QCPONT - TEXGAM - QUNAV
L SYSCOM

SOLNT
- COMP QCPONT - TEXGAM- QUNAV

L SYSCOM
QAVERG

- QPRINT

- QPLOT -- - GRAPH - ENDPLT

Jr
_ MCSIM - mr TM XVART
L-SYSCOM

Fig. 3.13: Logical flow chart used in UNRAC.


117

In order to check the accuracy of the code we benchmarked

it against PREP and KITT and PL-MODT. Figure 3.14 shows a


sample fault tree and respective failure rates which appeared

in Vesely (1970) and Modarres (1979). Figure 3.15 shows the


results of this study with PREP and KITT, PL-MODT and BIT-

FRANTIC (Wolf and Karimi (1978)). As we can see, the results


bf UNRAC are more consistent with the exact results than

either FRANTIC or PL-MODT. In fact, with the use of the first


3 terms approximation, UNRAC produces almost identical results

to the ones given in Vesely (1970), see Table 3.2.

3.4 Uncertainty Bounds Evaluator

3.4.1 Introduction

The unreliability of the top event of a fault tree is

evaluated from the unreliabilities of the basic events con-

tained in the tree. This quantification is a straight forward


process when the basic events (components) parameters (failure

rate, repair rate, etc.) are exactly known. However, uncer-


tainties exist in components' data because:

1) lack of consistent information on the failure and

repair rates,

2) diversity in manufacturing specifications.


Therefore, any quantification analysis is incomplete without

information on the uncertainty of the top event unreliability.

There are two general methods of propagating uncertainties.


One of these is an analytic approach, Lee and Apostalakis

(1976), where the authors first use the Johnson family


118

Min. Cut Sets Component Failure Rate

1,2 Comp. # x 10 6 hr-1


1,3
1-3 2.6
2,3
4-6 3.5
4,5 7,8 5.0
4,6
9,10 8.0
5,6

Fig. 3.14: Fault tree diagram. Minimal cut sets


and failure data of an example Reactor
Protection System (Vesely (1970)).
119

_ __ ___ _
I I I. I I

3 _
J.1

10 1
9
0 0
8
7

6
>_j
+j 5

.I-
)
4
r-
.r
3 -

2 t

10-2
9
8
7

3
I I I_ _ I I
20 40 60 80 100
Time (102 hrs.)

Fig. 3.15: A comparison of UNRACwith PREP


KITT, PL-MODT, and BIT-FRANTIC.
120

Table 3.2 COMPARISON OF THE RESULTS OF THIS STUDY WITH

RESULTS OF KITT FOR AN EXAMPLE FAULT TREE

Time Vesely (1970) 1st Term Approx. 1st 3 terms Approx.


x 103hr
Exact Q Q Q

0.0 0.0 0.0 0.0

1.0 3.49 - 03 3.569 - 03 3.488 - 03

2.0 1.32 - 02 1.379 02 1.317 - 02

3.0 2.80 - 02 2.998 - 02 2.80 - 02

4.0 4.70 - 02 5.152 - 02 4.704 - 02

5.0 6.95 - 02 7.782 - 02 6.95 - 02

10.0 2.12 - 01 2.636 - 01 2.117 - 01


121

distributions, (see Hahn and Shapiro (1967)), to approximate


each components' distributions, then find the first two

moments of the top event function. Finally they use these

two moments to estimate confidence bounds by standard inequal-

ities (e.g. Tchebychev, Cantelli) or empirical distributions

(moment matching technique). This method is only useful for

small fault trees.

The other approach is the Monte-Carlo simulation routine

which is presented in this section and can be used to obtain

an empirical Bayes estimation of the probability interval of

the system reliability (unreliability). In this procedure,

each components failure probability is represented by an

appropriate distribution. And in order to evaluate the top

event probability, each components failure probabilities

will be simulated randomly according to their respective dis-

tributions. One of the drawbacks of this technique is that a

high degree of resolution for the top event distribution,

requires that the sampling be done a large number of times.

Error propagation by means of Monte-Carlo simulation has

previously been investigated in the Reactor Safety Study, [WASH


1400 (1975)) and many ideas and assumptions used in this
study are adopted from that report. Several codes have been

written to compute the probability distribution of the top

event, i.e. SAMPLE (WASH 1400 (1975)), LIMITS (Lee and Salem
(1977)), MOCARS (Matthews (1977)).

All the above mentioned codes require user-supplied


122

function as input to identify the logical dependence of the

components in the system that is being analyzed. For a very


large fault tree and/or for a medium tree but with a lot of

repetitive events, the construction of this equation is a non-

trivial process and there is a high probability of introducing

errors. Therefore, there is an incentive to have a Monte-

Carlo simulator coupled with our unreliability analysis code.

3.4.2. Monte-Carlo Simulator Used in UNRAC

The Monte-Carlo routine used in conjunction with UNRAC is

called MCSIM (Monte-Carlo SIMulation). Here we will cite


the mathematical concept of the Monte-Carlo simulation without
proof, and the reader will be referred to Dixon and Massey

(1957), Cashwell and Everett (1959), Hammersley and Handscomb

(1964), and Mann et al. (1974) for details.

MCSIM allows simulation to be carried out for fault trees

having the following component types:

1) components with constant failure probability

2) non-maintained component

3) monitored component

4) periodically tested components.

And it can accomodate the following types of distributions


for the component's parameters:

1) exponential

2) lognormal

3) gamma

4) normal
123

It also has the following capabilities:

1) permits simulation for both the average and the time

dependent unreliability, and

2) allows propagation of error bounds on the following

failure parameters:

a) failure rate

b) mean repair time

c) mean test time.


All the aforementioned properties are encoded in the

XVART routine.
In general, there exist two steps in every Monte-Carlo

simulation code used in reliability analysis. First, one

simulates the desired distribution and then sorts the

results in an ascending or descending order. Therefore, there


exist two time consuming steps that one has to overcome in

order to have a comparatively efficient code. For the first


step, numerous reports on accelerating random number generation

have been written. Among these are: Conveyou and MacPherson

(1967), MacLaren, et al. (1964), Marsaglia, et al. (1964a) and

(1964b). The above authors discuss the fast methods to


generate Uniform, Exponential and Normal random numbers. And,
in general, these are sufficient to generate any kind of dis-

tributions random numbers. For example, if one wants to


generate a Gamma random number, oneneeds a combination of

Exponential and Uniform random numbers. In the case of Normal


random numbers, one can use either the Central Limit Theorem
124

or use a combination of Uniform random numbers. Marsaglia et

al. (1964a) suggest that the latter method needs less storage

than the former one. Here we discuss how to generate a

Uniform random number between zero and one, U (0,1).

According to McGrath et al. (1975) for computers with an

available word length less than or equal to 32 bits, the most

efficient random number generator has the following form:

Xn+l = a Xn(Mod 2) (3.17).

where

P is the number of bits (excluding sign) in a word;

for IBM 370, P = 31

a is called the generator which should be carefully

selected

Xi(Mod 2P ) is the remainder of Xi/2P

The above routine for generating a random number is fast

and will produce numbers whose properties approximate random-

ness sufficiently close for valid use in Monte-Carlo simulation;

provided that special care is taken in choosing a proper

generator. McGrath et al. (1975) suggest a = 513 In general,

X i repeats itself after at most 2 P steps. For further infor-

mation about different distributions random number generators,

see Appendix C.

In order to accelerate the sorting routine, we use a

procedure developed by Shell (1956) and used in LIMITS Code.

Figure 3.16 shows the difference between SAMPLE and LIMITS


125

1000
-

i
_.

1-p

-r- ) 4. 4. 4 4 -
4 4 l ,

3
i 4

I
I I
-lj -- r JJ ·
Jg
i i i
!
r rod
SAMPLE
- raw r
_
l l
I --I I -I
l l
F
L l
-- F- - - t
-
-
I-
I I I I
- -
1
-
- i I
- -
I
- - I- I
-
I I I I
I I
C
3
I -

-
-

- -·
3

-
.
- ----
.I. .
-
.1:
L --
. .
· .
. . .-
.
I I .t .1 . !
.I . I.
I II1I m: .
I I I
100

4
? 4
3 I. 4 + 4 i 4

l:
oU I 4
4i
4

vC)

t i r + -4+-11-i 11 11 -I-[ tI II
C- i 1! =r ITII 1' II I I I I I I~-
: 10
C;

LIMITS

5 10 15 20

Number'ofTrials in Thousands

Fig. 3.16: The execution time of the LIMITS


and the SAIPLE Codes (Lee Salem
(1977)).
126

execution time for a special case which appeared in Lee and

Salem (1977). The flow chart of the sorting routine is shown in


Appendix C.

Another important consideration of the Monte-Carlo simu-

lation is the accuracy. The accuracy of a simulation can be

indicated in several ways. An example of these is the simple

non-parametric measurement of the accuracy of an estimated

distribution by sampling given by Dixon and Massey (1957). An

example of their accuracy measure for large sample size N can

be expressed as:

Pr(IXs - Xp < 1.36) = 0.95 (3.18)

Where Xs is the estimated distribution fractile, and Xp is the

corresponding exact cumulative distribution value of the under-

lying population from which the sample was randomly picked.

This measure of accuracy is too conservative and requires a

large number of samples. For example, if we want to be 95%

sure that the estimated distribution does not deviate more

than 0.01 from the exact distribution throughout the entire

range, a sample size of not less than 18,496 is required.

Cashwell and Everett (1959) found the following equation by

using Tchebychev's inequality and binomial sampling:

t u2

Pri N e = n e du + R (3.19)

Where N is the number of trials M is the number of success,


127

t = N and q = 1-P. The quantity R is the error associated


- pq
with the probability measure and is given by:

t2
IRI.<e
/R1. 2<q + 0.2 0.25
+Npq Ip-q + e
e (320)
(3.20)
R2Npq Npq
Equation 3.19 can be put into a more familiar format:

P (l PI < e) = erf(-t) + R (3.21)

The above set of equations do not have any specific limitation

as to where it can be used. They are applicable to any distri-

bution as long as the sampling is taken from an identically

distributed random variable.

In Eqn. 3.21, M is the actual probability limit associated

with Xp which is the P fractile point in our simulated distri-

bution, and if we denote it as X(P), then we can write:

Pr(JX(P) - P < ) = erf(-t) + R (3.22)

For a given number of trials N, and P, the above probability

is constant and will not vary with distribution. For example,

if we fix N = 2000, P = 0.95 and -= 0.4%, then

Pr(lX(O.95) - 0.951 0.004) = erf(0.58) = 0.5879

IRI = 0.039

which indicates that with probability 58.79%, the value calcu-


lated for the 95% confidence limit lies between 94.6% and 95.4%

confidence limits, The maximum error in this case is 3.9%.


128

MCSIM gives this information with the final top event unavail-

ability.

3.5 On the Distribution of Input Variables

This section is devoted to a description of the various

forms of distributions that are used in this work for

expressing the unreliabilities of input variables, i.e.

failure rates, repair rates, etc. The question of how these

forms can be obtained from existing information, theoretical

models, engineering judgement or subjective considerations, is

not addressed in this study and the reader will be referred to

Mann et al. (1974).

The input variables of the various components of a system

are assumed to be positive random variables taking values in

any interval of the positive real axis. The following are the

list of Pdf considered in various parts of this study.

A. Gamma Probability Density Function (pdf)

The gamma pdf is used to describe the distribution of

continuous random variable bounds at one end. The gamma Pdf

is defined by:

X tn.-l -),t

f C(t;nk)= t>O,A>O, n>.O (3.2,3)


Yf(t;nX) 0 elsewhere

where r(n) is the complete gamma function defined by

r(l)= xlexdx (3.24)


129

and if n is a positive integer then:

r() = (n. - 1)! (3.25)

In the definition of the gamma distribution Eqn. (3.23)


n can take on any positive value. When n is restricted to

integers, the gamma distribution has been referred to as Er-

langian Distribution. When q= 1, gamma distribution will be

an Exponential distribution. In this study we used a general

second order Erlangian distribution and an exponential distri-

bution whose pdf's are defined by:

2nd order Erlangian, n = 2


a)

f(t;X) { A te t(3.26)
0 elsewhere t > X>

b) exponential

ACXt
e
=
f(t;X) lX
X > 0, t > 0 (3.27)

B. The Lognormal Probability Density Function

The lognormal pdf describes the distribution of a con-

tinuous random variable bounded from below by zero and whose

logarithm is distributed according to a normal pdf with para-

meter and a. The normal distribution has the following Pdf:

fN(t;a) = 1 exp[- (t -p) ]


aE 2a t
a > 0 + < < - (3.28)

Thus, the lognormal pdf is defined by:


130

JL1
(1 exp[- -- (nt 2
I
flt;V02)=t /2
0 elsewhere t >0, a > 0, (3.29)

- < < -X

The first four moments of the above distributions are given in

Table 3.3.

To generalize the equations (3.24, 2.36, 2.37, 2.28, and

3.29) in order to describe a random variable that has an

interval other than [0,1], say [X,1], we replace t by (t-X)

in the aforementioned equations.

3.6 Moment Matching Method

A distribution is completely defined once all its


moments are known. However, many distributions can be ade-

quately described by the first four moments. In addition, the

first four moments play an important role in fitting empirical

distribution and in approximating the distribution of a random

variable.

Wilks (1962) argues that probability density functions

(pdf) of bounded random variables with a finit.e number of


lower moments in common exhibit similarities, since in the

limit (all moments the same) they would coincide in a unique

pdf. The unreliability of a system being a probability, is


bounded because it can take values only in the interval [0,1],
and therefore, the moment matching can be applied if its n
first moments are known. Obviously, the more moments that are
131

- - - - __ · ___I_ I _··___I___ ____ I___ ____

zO
t NN
Po
'.0
Ii
I-I -I
+ 0
x--, I-
0
H +
M-
.9t
U
r-_
W~
N

0 I!
N N ,-I

u
H
;- N
)0
I

U,
_.S

c- N :1
eC r< )
N b
9 r4

H 0
4Jr ;; e:
P4 cd1

,< .
0 Xa aC14
.r< N
En 54O
r<
U), w4
u 0 -I

bO
00
rn
r a)
.rq
Na) +j
9V
Cd 0
9 P 0 0 9
W.
0
OF:
E 0 x4 O b1-
O
Z Id
Id -I P4
K
..
oq

_ _
132

available, the more exact the approximation would be. In


most instances, however,the first four moments are adequate.
This is the case when a two parameters pdf (like the ones
presented in Section 3.4.1 or a memberof the Johnson or
Pearson families) is chosen as an approximation. The third
and fourth moments determine the shape or the "type" of the

distribution and the first two define its parameters. More


precisely, the shape of the distribution is partly character-
ized by: (1) its third central moment or Skewness which is a
measure of the asymmetry or the distribution, and (2) its

fourth central moment or Kurtosis which is related to its

peakedness. In order to make these two "measures" of the


shape of a pdf independent from its scale, the following
coefficients are defined:

coefficient of Skewness: Bi. 3/21 (3.30)

and
coefficient of Kurtosis: 02 2 (3.31)

When k denotes the kth central moment of a random variable


or:

_ = 1 [ - p 1]k f(R) a (3.32)

Where

f(R) is the pdf of (unreliability or unavailability)


133

p1 denotes the first moment about the origin and is:

R f(R) dR (3.33)

If we denote Pk to be kth moment about the origin or

-k
_k, fo Rk f(R)dR (3.34)

Then we have:
' 2
=
2 2 - (p) (3.35)

1 1 t V 3

u3 =3 - 3P2 (P1 + 2(p1) (3.36)


' ' ' ' 2 ' 4
14 4 - 4P3 + 62(P1) - 3(pl) (3.37)

In cases where the form and parameters of the pdf under

study is unknown, the central moments, k' may be calculated

by replacing k by () Xk in the preceding expressions,


i= 1
where Xi, = i=l,2.--n, are the values of n given observations
(i.e., simulated top event reliability in Section 3.4). Thus,

P2 will read:

I 2 1 n 2(338)
P2 = _n Xi - - ( Xi)3.38)
i=l n i=l

The above equation leads to what statisticians call a


"biased estimate," Therefore, we use the corresponding un-

biased formula which is:


134

n n
n i=l x . (il Xi)2
i= n(n-l)i1X)2(3.39)

to estimate the variance of the distribution.

Hence, if the coefficient 81 and 2 can be obtained, the


shape of the distribution is approximately defined. Fig. 3.17
and Fig. 3.18 give numerical values of the coefficient and
of the various "theoretical" type of densities presented in

Section 3.4.1, and Johnson distributions (see Hahn and Shapiro

(1967)) respectively. From these figures, the type of density

that has the same a1 and 2 with the sought pdf can be
obtained.
135

II
J1 II
I

Fig. 3.17: Region in (8,1 2) plane for


various distributions
(Hahn Shapiro (1967)).
136

OC

0
-~t. P 0


rl -H
XCd

cacd

OJ
CH ,

C-4

t.-

rr~~~~~~

U, Cq en)
H H-4

C.a
II·

_I

Ilu

&i

-i-

a*

li·

?'
·-

ril

·-·
137

CHAPTER 4: APPLICATION AND RESULTS

4.1 Introduction
In this chapter we present a series of examples to vali-

date the effectiveness of the mathematical models developed

in Chapter 2 which have been encoded in UNRAC. As stated

earlier, (See Chapter 3), the UNRAC code consists of three

different parts, namely, cut set generator, unreliability

evaluator, and Monte-Carlo simulator. In Section 3.2 the

effectiveness of the cut set generator partwasdiscussed


by
comparing the UNRAC results with other published data. In

order to validate other parts and the models encoded, several

systems have been analyzed using UNRAC code and the results

have been compared with the other codes and/or published data.

They are:

1) Auxiliary Feed Water System (AFWS) an example from


WASH 1400.

2) An example of an electrical system

3) A Chemical and Volume Control System (CVCS)

4) A High Pressure Injection System (HPIS), an example


given in WASH1400.

4.2 Auxiliary Feed Water System (AFWS), A Comparison with


WASH 1400

The function of AFWS is to provide feed water to the

secondaryside of the steam generators upon loss of main


feed water. In this typical example, the system consists of

three pump trains, two condensate storage tanks and several


138

different types of valves. Figure 4.1 shows a simplified

flow diagram of the AFWS that appeared in WASH 1400 (1975).

The simplified (reduced) fault tree is shown in Fig. 4.2.

Table 4.1 shows the data used in both WASH 1400 and this

study.

The main objective of this example is to show the

requirement of time dependent unreliability analysis for the

standby systems in order to find the unreliability or unavail-

ability per demand. In WASH 1400, the average point estimate

unavailability of the components was used to find the top

event (system) unavailability for a given interval. The


pictorial summary of the results is shownin Fig.4.3.
For this study, Fig. 4.2 was further reduced by using

the techniques mentioned in Section 3.2.2 in order to

eliminate the unnecessary evaluations. The fault tree used

to input to UNRAC is shown in Fig. 4.4. UNRAC generated 145

cut sets and evaluated the average and time dependent unreli-

ability of the AFWS for the interval of zero to one year.

The results showed that the unavailability per demand of the

AFWS (average unreliability over 1 year) was 3.114x 10 .

The maximum unreliability which caused by inspection period

was 4.18x 10 2 and the minimum unreliability resulted after

each repair was 5.786x 10 . Figure 4.5 shows the time

dependent unreliability of the system over the interval of

60 days. As can be seen, the unreliability of the system is

*NOTE: The exact results were given here for possible future
check.
139

0
0
-;r

ci
"4

-H
·la
10

"-I
0

I ____ '4.

ri
"4
P.
140

*
CqZ

fZ
141

elfSTFl
U5Tr nTI-
S."·
wri"TO
VPMITP

Fig. 4.2 Continued


142

1o

0b
K
O N11
IT

mb

Cl

0
'H
g

ri
x
rl
Or
:3
Et 'H

+r0k
CI)
'1 '

i
olt
'9
0 C) H
a
I1f

%
:I

w
;

le

I
143

Cn

'4
LH
0

r4

0
Cd

44
1)

pUv

I,{
U

C4
4-4

q.H

~LO
144

rJ
rI
C-,
Lw
00

u,It

F.rq
145

Table 4.1: AUX-FEED WATER SYSTEM COMPONENT


FAILURE CHARACTERISTICS DATA

FAILURE TEST FIRST TIME FOR TIME FOR VRRIDE RESIDUAL


COMPONENT RATE INTERVAL INTERVAL TESTING PEPAIR UNAVAIL UNAVAIL
NO, (PER HR) (DAYS) (DAYS) (HRS) (HRS)

1 0.0 0.0 0.0 0.0 0.0 0.0 5. 10E-07


2 0.0 0.0 0.0 0.0 0.0 0.0 1.00E-04
3 0.0 0.0 0.0 0.0 0.0 0.0 I.00E-04
4 0.0 0.0 0.0 0.0 0.0 0.0 .00E-04
S 0.0 0.0 0.0 0.0 0.0 0.0 1.00E-04
6 0.0 0.0 0.0 0.0 0.0 0.0 1.00E-04
7 3.0 OE-07 3.00E+01 I OOE+01 1.50E+00 4.20E+00 1.OOE+00 1.00E-03
8 3.00E-05 3.OE+01 1 .00E+01 1.50E+00 4.20E+00 1.00E+00 1.OCE-03
9 0.0 0.0 0.0 0.0 0.0 0.0 .00E-03
10 1.0OOE-07 3.00E+01 1.00E+01 1.50E+00 4.20E+00 1. OOE+00 t.00E-03
11 0.0 0.0 0.0 0.0 0.0 0.0 7.50E-05
12 0.0 0.0 0.0 0.0 0.0 0.0 1. E-02
13 5.10OE-06 3.OOE+01 2.OOE+01 1.50E+00 4.20E+00 1. OOE+00 2.00E-03
14 0.0 0.0 0.0 0.0 0.0 0.0 3.70E-02
15 3.00E-07 3.00E+01 2. OOE+01 1.50E+00 4.20E+00 1, OOE+00 1.00E-03
16 0.0 0.0 0.0 0.0 0.0 0.0 1.OOE-04
17 3.0 OE-05 3.00E+01 2.00E+01 1.50E+00 4.20E+00 1, OOE+O
00 1.OOE-03
18. 3.00E-0S 3.00E+01 3.00E+01 1.50E+00 4.20E+00 1.OOE+00 1.00E-03
19, 0.0 0.0 0.0 0.0 0.0 0.0 1,OOE-03
20 0.0 0.0 0.0 0.0 0.0 0.0 1.00E-04
21 3.00E-07 3.00E+01 3.OOE+01 1.50E+00 4.20E+00 1.00E+00 1,.00E-03
22 0.0 0.0 0.0 0.0 0.0 0.0 3.70E-02
23 5.1OE-06 3.00E+01 3.OOE+01 1.50E+00 4.20E+00 1. 00E+00 2.OE-03
!
146

Pi

H
H

10 20 30 40- 50 60
Time (Days)

Fig. 4.5: AFWS time dependent unavailabilities as calculated


by UNRAC.
147

highly time dependent, and varies by 3 order of magnitude.


This strong time dependence of the unreliability is not

revealed in WASH 1400 where only the average value is

reported. This additional information can be useful in

providing further insight into accidental analysis. The

UNRAC code is also able to evaluate the importance of each

component in producing the top event. The code determines

importance by two methods: (1) Birnbaum's measure, and

(2) Fussell-Vesely measure. Birnbaum's measure is the simple

differentiating method and is evaluated by the following

equation:

(B.I.)i Q(t) = Q(t) - Q(t) (4.1)

qi(t)=l qi(t)=O

where
Q(t) = top event unreliability equation, given in
Eqn. (3.12),

qi(t) = unreliability of component i,


(B.I)i = Birnbaum's importance of component i.

whereas the Fussell-Vesely measure of importance is defined

by:

(F.V.I) = Prob. (cut sets containing component i) (4.2)


Q(t)
148

The reader should note that these two equations determine

different measures of importance. The Fussell-Vesely

determines the criticality importance whereas the Birnbaum's

determines the actual importance of the component (Lambert

(1975)). The UNRAC results for the importance analysis based

on the average component's unreliability is given in Table

4.2. As would be expected, Birnbaum's measure shows that the

single failure (failure of component #1) has the highest

importance, whereas the Fussell-Vesely measure shows that the

common power failure to both electrical pumps (failure of

component #12) has the highest contributions (i.e., the

component which contributes most to the top event).

To find the top event uncertainty bound and the approxi-

mated distribution, three different runs were made. For the

first run we used the average point estimate unavailability

of each component with the error factors on these estimates

given in Table 4.3. The assumed distribution for each compo-

nent involved was considered to be a log-normal. The first

two columns of Table 4.4 shows the top event unavailabilities

with its respective confidence levels. The error factor, which

is calculated by the following equation, was found to be 3.79.

x9 x .50

.5X0.05

where

Xi is the value of top event unavailability with respect


149

Table 4.2: COMPONENT'S IMPORTANCE CALCULATIONS


OF AFWS USING FUSELL-VESELY AND
BIRNBAUM MEASURES

COMPONENT FUSSEL-VESELY BIRNBAUM'S


NO, MEASURE MEASURE

I 1.6192E-03 1 .0000E+00
2 6.3498E-05 2.0000E-04
3 6.3498E-05 2.0000E-04
4 6.3498E-05 2.000OOE--04
5 6.3498E-05 2.0000E-04
6 4.6607E-03 1.4680E-02
7 1.5392E-01 1.4680E-02
8 6.4245E-01 1.4680E-02
9 4.6607E-02 1.4630 E-02
10 1.5061E-01 1.4680E-02
11 5.1001E-03 2.1419E-02
12 6.8001E-01 2.1419E-02
13 2.4262E-02 1 .2867E--03
14 1.5115E-01 1.2867E-03
15 1.3235E-02 1 .,2867E-03
16 4.0850E04 1 .2867E-03
17 5.6086E-02 1.2867E-03
18 5.6619E-02 1.2853E-03
19 6.8001E-02 2.1419E-02
20 4.0807E-04 1.2853E-03
21 1.2988E-02 1 .2853E-03
22 1.5098E-01 1 .2853E-03
23 2.41 39E-02 1 .2853E-03
150

Table 4.3: AUX-FEED IWATER COMPONENT'S


AVERAGE UNAVAILABILITY ERROR FACTOR (EF)

Component EF on Average Component's Unavail-


No. Unavailability ability Distribution

1 30.0 L*

2 3.0 L
3 3.0
L
4 3.0
L
5 3.0
L
6 3.0
L
7 6.0
L
8 3.0
L
9 10.0
L
10 3.0
L
11 30.0
L
12 3.0
L
13 3.0
L
14 3.0
L
15 6.0
L
16 3.0
L
17 10.0
L
18 10.0
L
19 10.0 L
20 3.0 L
21 6.0 L
22 10.0 L
23 3.0 L

*L stands for Log normal


151

4JH
4) :3 '-d00000000000000000
4) coaI
. ,:t, -t, ,
I I I
,
I I
,
I
,.
I I I I I
,
I I
mI
m
I
m
i
0z
*H > W P4 P4 p4 P4 W W W4 W w 4 p4 p p w p p
Ln H H - O 00I N O 00 0 0 N N
E- 4- 4w
*o
A
Ln
c,
0n
OD "
n" OiUn Ln
oOc0C
cH
O
Ln
Ln
c0o (m r-
OL
ri v
Ln
%o
n Ln vi r d*
tn 0 " mM t co O o. .n f 4 ) - C r H
m az
a) ai,.
HH H NN * * ,
n i
wvDb .
eH N .
L f
.

C/)
4)
H 0
H-iC/3
Cd
a- > Ln Ln w- t t qmtt q t t- tn tn m m m -
-.4

-4 O ~~~~~ r-~o ~ - 0i O
, .... p~~~~ppp~~~ppp
0Ng-
z
p4 +.. t/szn - IF · Ln
I 9tI qI t tI
X
4j
,
I e*t
R* I II ,
I t I I qd
lid I rn
I tnI I rn
M I teI VI
Cd O CD CD O Ol O O CD CD O CD CD O CD O O O O
E- 4 Ln I I PI co I) ID I I- CD I t-I 0I I I I I
x 0a Mt %o (D oot an m _I _I 1 n - 0
o4 o O M VI CD Ie
at CD m r- m m C) m
N %D
o
on b O~ H N N e L ) m
mHO - o .i H n w
.. i ,
........ i _ , _
.... i mn

U
PA

H0
E-4
H 19t It Mt It q m "t Rd tn tn n be
p4 n
H ler F-i O CD CO O CD O CD O) O O CD CO ° CO O O CD
) Cd I I I I I I I I I I I I I I I I I
+ >
Cd
w
,-
w
oD r-
w1 w1 w
wx r-
w
t,
w1 w
o a
w
oD r-
w w: w
o
wL wL w
r cn
w1 w x
p4 Ln Ln r- O
W4 4.i

4 C4 p4 14 pfi p4
Pz
a4
3
4
uC0 P
4
LL4p el p4 p4 4 p Lp4

Cd aC a- an c a ~ ~ a ~ a; a; a ~ a ~ a ~. a
0
1
Cd
u EL H- 0r0H-O"
- . * L* otl
00t m. mfH~L)
0.
g-4 .
cdpOo~
* *
H Hn o N
* .
o) o
.
of o
* *
o- o~ o Hn o
. . .
iN m
'r
.0

~0
Cd
H
:j W
U
. ',. .0
p4
+K
152

to ith confidence level.

To approximate the top event distribution, the method

discussed in Section 3.6 was used. By using Fig. 3.18 and

the values of 81 and 82 which were 11.5 and 20.3 respectively,


the top event distribution is bound to be a Johnson SB distri-

bution with the following general equation (Hahn and Shapiro

(1967)):

)
Xx+e ] 22)
(X
,/- (x-e).(A,-x+e)-expp 1[y+;n(Xe
Y (4 4)

g<X< a+ , -a <Y <+ , >0, - < <+c

Since x is the unreliability value, it is boundedbetween


zero and one, hence; x=l and e=O. To find the other para-
meters, y and n, the following set of equations were used:

Z- Z
= (x. ) x+- - x) (4.5)

= Za
In-X
n
a -
- x

where a = percentile > .50

Za is the alOOth percentile for a standard normal


variate

n = are the estimated values for n and y


153

x = is the alOOth percentile for the calculated data

Substituting for , , n and y in Eqn. (4.4) we get:

f(y) dy = 1 e 2 dy, (4.6)

-o <y < + o

where:
Y + in l-x

Column 4 of the Table 4.4 also summarizesthe estimated


values of top event unavailability using approximated distri-
bution given by Eqn. (4.6) with i and evaluated at

a = 0.95. Column 3 of Table 4.4 gives the top event distri-

bution assuming a log-normal distribution with the median of

6.0310E-04 and an error factor of 3.97. The latter approxi-

mation to the top event distribution was used in WASH 1400.

Figure 4.6 clarifies the difference between the actual results

and the above approximation distributions results. As can be

seen, for the majority of the data the simple log-normal

distribution can approximate the top event distribution very

well. This figure supports the validity of the WASH 1400

assumption of log-normal distribution for the whole system.

However, the distribution that best fits the top event is

found to be the S B Johnson distribution with the following

parameters: X= i, = 6.48730E-5, rn=1.1446445, y= 8.6154013.


154

a
0
o
0
:3
14
e
H

:>
ca
3o
04J
.0

_ 0
° ot
o 0
S4,
_
ago,_~

'C

WV3
. r4vH
vi WrH

I o
.

IJ i

'..

4
Nr4

ato q I g V} ' T co P % ml - e Cl I
I 0

,X:I~qelTAeufl uaA3dol
155

The above values were found by matching 25th, 50th, and 75th

percentile of top event with the fitted distribution (of.

Eqn. (4.4)). It is worth mentioning that the value of e was

+
found by the following equation:

A*A

(x 0 5 )(E+X-xa) (Xa- E)(A+- X 5)


' ~ ' (4.7)
y - 0.5
X ) (Xl-a - ) ( + X )(X0.5 - )

where
A = assumed to be 1

a = is the same parameter as in Eqn. (4.5)

For the second run we used error factors on the indi-

vidual component's characteristics (i.e., failure rate,

repair rate, etc.). Table 4.5 shows the data used to

evaluate the top event uncertainty bound. The error factor

of the top event was found to be 3.654. The evaluation was

carried out by simulating each component's characteristics

distribution with Monte Carlo sampling and finding the

average unavailability of each component at every iteration

to calculate the top event values. Again the simple log-

normal distribution seems to approximate the top event

quite well (See Fig. 4.7).

For the third run, first the error factor on some of the
component's parameters data were changed to see the effect of

the longer tail for log-normal distribution on the top event.

-mI
156

Table 4.5: AFWS COMPONENTFAILURE CHARACTERISTICS


ERROR FACTORS DATA

COMPONENT
NO. LAMOA DIS TC DIS TR 015 ORESID DIS
1 0.0 0.0 0.0 30.000 L
2 0.0 .0.0 0,0 3.000 L
3 0.0 0.0 0.0 3.000 L
4 0.0 0.0 .0.0 3.000 L
5 0.0 0.0 0.0 3.000 L
6 0.0 0.0 0.0 3;000 L
7 3.000 L 3.000 L 3.000 L 6.000 I
8 3.000 L 3.000 L 3.000 L 3.C00 L
9 0.0 0.0 0.0 10.000 L
10 3.000 L 3.000 t 3.000 L 3.000 L
11 0.0 0.0 0.0 30.000 L
12 0.0 0.0 0.0 3.00-3 L
13 3.000 L 3.000 L 3.000 L 3.000 L
14 0.0 0.0 0.0 3.000 L
1s 3.000 L 3.000 L 3.000 L 6.000 L
16 0.0 0.0 0.0 3.000 L
17 10.000 L 3.000 L 3.000 L 3.003 L
18 10.000 L 3.000 L 3.000 L 3.000 L
19 0.0 0.0 0.0 10.000 L
20 0.0 0.0 0.0 3.000 L
21 3.000 L 3.000 L 3.000 L 3.000 L
22 0.0 0.0 0.0 i O. 000 L
23 3.000 L 3.000 L 3.000 L 3.000 L

The values in columns 2,4,6 and 8 are the error


factors on failure rate, average test time, average
repair time and residual unavailability of each com-
ponent, respectively.
157

ao
_ __ __ _____I_ __ 7
I . ' I I I I I I
I I I a' an

0
0.
x co
oa 4i
04 oo

04 4r 0o 0
0% U
I..
Jt4 q .9

a
0 0% W
Oi
. e0
eo
. -Y
tna cn1.

I w o01.- .,o
4i jE
68 go

o n
b Q =I O
o 2h

O C

0" 0
0

I tJ0
n o

be

,4
I I a I
I
II I I 1I I I IL Li
in -4 &4 0 Q
P4

AhlTeq'teeuun ita3g do£


158

Second, it was assumed that the mean test and repair time

follow a gamma distribution. Then the top event unreliability

was evaluated for 1200 and 5000 Monte-Carlo iterations for

both cases. Table 4.6 summarizes the input error factors for

this run, and Table 4.7 and 4.8 give the top event unrelia-

bility computations. The results show although there exist

differences between the values of the top event unreliability

for 1200 and 5000 iterations, the 1200 iterations will suffice

for the analysis.

In another run, the importance of the components in the

system were evaluated based on the assumption that electric

pump number 1 is down, i.e., under the test at 721 hours.

Based on the above assumption and knowing that when one

pump is down the components in that pump train are ineffec-

tive for delivering water, the UNRAC regenerates the new

minimal cut set by discarding all the failed and ineffective

components. This capability is provided in order to make the

code more versatile in importance analysis while preserving

other features of the code and, in addition, the original

system fault tree is only required input to the code for all

the subsequent evaluations. The code internally changes the


system configuration by the relevant data that user will

supply as input for the intended job. The results of the


importance analysis at 721 hours of operation is shown in
Table 4.9, As can be seen, at this time of operation, the
turbo-pump is the dominant component in the top event
159
fr

Table 4.6: AFWS COMPONENT FAILURE CHARACTERISTICS


ERROR FACTOR DATA, AN EXAMPLE FOR LONG
TAIL
- LOG-NORMAL
-- AND GAMMA
-- DISTRIBUTIONS

COMPONENT *
NO. LAMOA DIS TC DIS TR DS QRESID DIS

1 0.0 0.0 0.0 30. 000


2 0.0 0.0 0.0 10.000
3 0.0 0.0 0.0 10. 000
4 0.0 0.0 0.0 10.000
5 0.0 0.0 0.0 10. 000
6 0.0 0.0 0.0 10.000
7 10.000 L 3.000 L 3.000 L 10.000
8 10.000 L 3.000 L 3.000 L 10.000
9 0.0 0.0 0.0 10.000
10 10.000 L 3.000 L 3.000 L 10.000
11 0.0 0.0 0.0 30.000
12 0.0 0.0 0.0 3.000
13 10.000 L 3,000 L 3.000 L 10.000
14 0.0 0.0 0.0 10. 000
15 10.000 L 3. 000 L 3.000 L 10.000
16 0.0 0.0 0.0 10. 000
17 10.000 L 3.000 L 3.000 L 10.000
18 10.000 L 3.000 L 3.000 L 10.000
19 0.0 0.0 0.0 10.000
20 0.0 0.0 0.0 10.000
21 10.000 L 3.000 L 3.000 L 10.000
22 0.0 0.0 0.0 10.000
23 10.000 L 3.000 L 3.000 L 10.000

*TC and TR distributions are changed to gammna


distribution for
calculating the values in Table 4.8.
160

Table 4.7: THE EFFECT OF THE MONTE-CARLO


SIMULATION TRIALS ON THE TOP
EVENT DISTRI BUTION

I III

Level Simulation Trials


of
1200 5000
CONFIDENCE TOP EVENT VAL TOP EVENT VAL
0.5% 7.5631E-05 6.8032E-05
1 .0% 8.6380E-05 8,1885E-05
2.5% 1 0510E-04 1.0826E-04
5.0% 1.3432E-04 1.3495E-04
10.0% 1.8046E-04 1.8212E-04
15.0% 2.2543E-04 2.2493E-04
20.0X 2.6124E-04 2.66665E-04
25.0% 3.0705E-04 3.0928E-04
30.0% 3.6497E-04 3,.5501E-04
40.0% 4.8247E-04 4.6209E-04
50.0% 5,.9415E-04 5.8590E-04
60.0% 7.6224E-04 7.5284E-04
70.0% 1.0068E-03 1.0201E-03
75.0% 1,.2071E-03 1.2202E-03
80.0% 1.4576E-03 1.4824E-03
85.0% 1.8216E-03 1.9242E-03
90.0% 2.5920E-03 2.6107E-03
95.0% 4,.3103E-03 4.3677E-03
97.5% 5.8253E-03 6.8399E-03
·99.0% 1.1847E-02 1.2950E-02
99.5% 1,.9284E-02 1.9408E-02
.
i i _ ] ii _ iiii .. - -
161

Table 4.8: THE EFFECT OF THE MONTE-CARLO


SIMULATION TRIALS ON THE TOP
EVENT DISTRIBUTION 'FOR MIXED
COMPONENT'S FAILURE DISTRIBUTIONS

I 11 . J I - -- --- -r

Simulation Trials
Level
of 1200 5000
CONFiOENCE TOP EVENT VAL TOP EVENT VAL
0.5% 8.4943E-05 7.6879E-05
1.0% 9.8884E-05 8.8165E-05
2.5% 1.2106E-04 1.1801E-04
5.0% 1.5033E-04 1.4395E-04
10.0% 1.9731E-04 1.8879E-04
15.0% 2.2975E-04 2.2688E-04
20.0% 2.6790E-04 2.6786E-04
25.0% 3.1991E-04 3.1454E-04
30.0% 3.5748E-04 3.5738E-04
40.0% 4.6741E-04 4.6144E-04
50.0% 5.8465E-04 5.8958E-04
60.0% 7.8831E-04 7.6327E-04
70.0% 1.0462E-03 1.0250E-03
75.0% 1.2104E-03 1.1997E-03
80.0% 1.4565E-03 1.4865E-03
85.0% 1.7495E-03 1.8652E-03
90.0% 2.4342E-03 2.5818E-03
95.0% 4.2475E-03 4.2737E-03
97.5% 6.5750E-03 6.9437E-03
99.0% 1.2499E-02 1 .2298E-02
99.5% 1.6695E-02 1.6695E-02
i i . i i i
_

_
162

Table 4.9: IMPORTANCE ANALYSIS OF AFWS


COMPONENTS GIVEN THAT ONE OF
THE PUMP TRAIN IS OUT OF SERVICE
- $ ,.

COMPONENT FUSSELL-VESELY BIRNBAUM'S


NO, ,tASURE MEASURE

1 4.524 E-04 1.0000E+00


2 1.7744E-05 2.0000E-04
3 1.7744E-05 2.0000CE-04
4 1.7744E-05 2.0000E-04
5 1.7744E-05 2.0000OE-04
6 5.3813E-03 6.0653E-02
7 6.1554E-02 5.0653E-02
8 8.2237E-01 6.0653E-02
9 5.3813E-02 6.0653E-02
10 5.6393E-02 6.0653E-02
11 1.2359E-03 108574F.-02
12 1;6479E-01 1.8574E-02
13 5.3087E-02 1.8574E-02
14 6.0973E-01 1.8574E-02
15 1.7663E-02 1.6574E-02
16 1.6479E-03 1.8574E-02
17 1.34885E-01 1.8574E-02
19 1.6479E-02 1.B574E-02
163

unavailability with the value of 8.2237E-01 (See Fussell-

Vesely measures), where as Birnbaum's measure again predicts

that the single failure has the highest importance.

For the final run in this section, the independent fail-

ure of the components in each pump train were assumed to have

an Exclusive OR (EOR) gate logic. In fact, this is the

realistic assumption, because when the manual valve or

check valve fails in the pump train, that train will be

unable to deliver water. In the Table 4.1, we have initial-

ized 3 components of each pump train to be a periodically

tested component. Therefore, in their operational history

they are unavailable with the probability of 1 while under

the test. Hence, for the top event evaluation using the

general minimal cut sets we may overestimate the unavail-

ability of the system.To avoidsuchevaluations, those


ORgates that have periodically tested components should be
replaced by EOR gate. The EOR gate will put the gate output

to 1 when one of its input has occurred. While for the case
of OR gate the output value is greater than 1. To check our

system for such possibility, the independent failure of each


pump train was assumed to be interconnected by an EOR gate.

Then the technique mentioned to transform the EOR gate to

simple AND, OR and NOT gate (See Fig. 3.3), were used.

The final results showed that the peak values shown in

Fig. 4.5 are overestimated by a factor of three. It is worth


mentioning that, care must be taken in preparing the new fault
164

tree with EOR gate. For example, in the data for AFWS we

scheduled the simultaneous testing for the periodically

tested components in each pump train, i.e., in turbine

pump train. The pump, the manual valve, and solenoid valve

which open the steam, are tested each month at the same

time. Now if we use the EOR gate for these three components

then the output value for the time of the test will be zero

instead of real value of 1. The actual value of the maximum

unreliability during each testing period is also shown in

Fig. 4.5 (symbolically shown by o on the figure). As stated


earlier, these points are a factor 3 less.

4.3 An Example of an Electrical System, Comparison of UNRAC


with FRANTIC and BIT-FRANTIC
Figure 4.8 shows a simple electrical system used by

Modarres (1979). The main function of this system is to

provide light when the switch is closed. Relay No. 1 is a

normally open (N.O.) relay and its contacts are closed when

the switch is open. Relay No. 2 is a normally closed relay

and it will be deenergized (contacts will be opened) if the

switch is closed. Figure 4.9 shows a fault tree of Fig. 4.8

which neglects operator error, wiring failures and secondary

failures. Table 4.10 shows the arbitrarily selected data

that has been used to analyze the system in Modarres (1979).

First, the system structure function was formulated

and provided as input to both FRANTIC (Vesely and Goldberg

(1977)) and UNRAC. These results termed "exact" for the two
cases are shown in Fig. 4.10. Second, to compare the results
165

of UNRAC and BIT-FRANTIC (Karimi and Wolf (1978)), the fault

tree was directly input and the first term approximation

(rare event) (cf. Eqn. (3.13)) was used to find the top event

unreliability. Figure 4.10 also summarizes the results of

these runs (1st term). As can be seen, UNRAC and FRANTIC

exact results are only comparable where the unreliabilities

are small, i.e., less than -10 2 . The results of UNRAC

(Exact) and UNRAC (lst term approx.) are consistent to a

higher degree of unreliability than the former case. The

difference between UNRAC (Exact) and UNRAC (Ist term approx.)

at the end of each interval are caused by the approximation

used in evaluating the top event. The rare event approxima-

tion results in a conservative value when the average

unavailability of basic components are on the order of 10'1

or higher. The results of BIT-FRANTIC (1st term approx.)

are more conservative; however, it is believed that the

UNRAC results are more realistic.


166

rower
supply 2

Fig 4.8: An example of an electrical system


(Modarres (1979))
8 9 1

Fig 4.9: A fault tree diagram of the example


electrical system
168

cl I

::
a>, -.

x .r *) O OC 0O u L/3 0 Lu Li3 O

>
w
P, -ri (4
, -1 r-4 N rt r- N .- ¢ t4)
FI

C:
Ht

u
C4
H
w I·a)
Uv -'-. o un
o tn, o uM
*f ol u
a) . . . . . . . . .
z O
·
II
O.4 r 0;
O4 0
C - - CD1 r- 0
C rN-
M,
C .11
[-.,
0

H rQ

E-

.I ..--~:_
i~ ·
*Ha)C
4U) N N-I rna N N N N N
0 ) C

r~

0
v-I
r . 0 0 0 0

-'-
169

1. ~~~~~~~~~~~~~~~~~~~
~~~ / - ,._

9
.I - -
- !

8
7
6 /
/
5

4 /
J /
3 /a
/

2
/ 7.7 / /..---

7/
9
8
I
7
6
I

-
r-- Ia
H 5
4
0
5
3 : / /* - UNRIAC (EXACT)

2 - UNRLAC(st. Term)

.- ITT '-FRANTIC(Ist, Tern)

---- FRANTIC (EXACT)


10
9
8
7
6
a I I I I I I I I I I I _·
4 8 12 16 20 24 28

Time (Days)

Fig. 4.10: Comparison of time dependent unavailabilities of the


electrical system as calculated by UNRAC, FRANTIC and
BIT-FRANTIC.
170

4.4 A Chemical and Volume Control System, (CVCS)

The Chemical and Volume Control System provides a means

for injection of control poison in the form of boric acid

solution, chemical additions for corrosion control and reactor

coolant cleanup and degasification. This system also adds

makeup water to the reactor coolant system (RCS), provides

seal water injection to the reactor coolant pump seals,

reduces the inventory of radioactivity in RCS and can be used

as a high pressure injection system. The typical system under


study has three pump trains which have to pump 44 gal/min. of

the purified water from the volume control tank to the reactor

coolant pump seals and RCS. Figure 4.11 shows the simplified

pump trains of the CVCS. Each pump can only deliver a maximum

of 44 gal/min. under the design condition. Therefore, we shall

assume that two pumps are needed to meet the design requirement

of 44 gal/min. In addition, safety regulations require that

two out of three pumps be available except that technical

specifications do permit two pumps to be down for a period of

not more than 24 hours. Hence, in our analysis we defined the

top event failure to be the simultaneous unavailability of any

two pump trains. Figure 4.12 shows the simplified fault tree

diagram of the system. In order to be able to compare the


results of UNRAC with BIT-FRANTIC, the pumps were modelled

to follow a periodically tested component. In other words we

assumed that each pump undergoes a thorough test and repair

after each T 2 days, and if any pump fails between the test
171

U
tJ
ao
.rH

U
.-
.C:
ri

.z4
cor

r_

Cd

P-

zZ

(n(

C4
Crl

4-
r-

Hn
ar'
H'
CD
ED

pa
-o

rt

CD
.
H.

·e
vl
173

I..

.
r.
n
0
0
I-
A.
174

interval it will be restored without changing its failure

probability. To analyze the system under such an assumption,

the data shown in Table 4.11 is used. Figure 4.13 gives the

time dependent unavailability calculated by UNRAC and BIT-

FRA4NTIC. The difference between the results of these two

codes arises from the use of the rare event approximation in


FRANTIC.

In another example it was assumed that the pumps are

normally working during operation (not in stand-by). In this

case, the pumps are unavailable if they are down because of

repair or test. To analyze the system unreliability it was

assumed that there is a 20% chance that the pump needs to be

tested if it fails, i.e. P3 = 0.2 (See Fig. 3.8) and also there

is a 20% chance that the pump trip was spurious and therefore

the pump can be restarted, i.e., P 1 = 0.2. The rest of the

data given in Table 4.11 remains unchanged. The results of

the top event unavailability for two types of repair distribu-

tions, namely exponential and 2nd order Erlangian are sum-

marized in Table 4.12.

Having analyzed the CVCS pumping system by one of the

above methods (either assume that the pumps are periodically

maintained or usually running (monitored)), one can check the

design requirement for the system availability. For example,

in the first method, the unavailability per demand of CVCS

pumping system is 3.08x 10 1 or 1 out of three times the

pumping system cannot meet the design requirement. If one


175

-JJ mI',qlaVOntmvcq' V'Ln'


000O00000000O00000OO
D I I I I I I I Ia I I I I I I I I I I
O-C
4
) o000
wwwwwwwwwwwwwwwwwwAUW
00 000000000000o
oooooooooooooooooo000000000000000000
: -C
. ° y .... C_ V

U)
a..
LL Cl! 0O
00.
W
0 3*3
W
0 ..W
>. 0o00000000000000000
8 0
o + + +

o o 0
00 + + +

4> 0*. 0a . . . . . . 0. . . . . . . . . . . .
I4 Ul 0 O- 0 0
WM 0000000000000000000
+ ® ® + e ee e+

W-a 0 0

I I I
1^Z1
oa _ IO
U hi
C
O
a
a
a
3JWaUWx hi
0
WI
0*
h
0(

l ~u.
a· w .
H ZO

WU
Ev ~ E~
Q
cx~~~~rmO~
176

1.0
9
8
7
6
5
4

EH 2

.3

10-1
: 7
~Z 7
6
5
4

10-2
9
8
7
6
5
10 20 30 40 50

Time (days)
Fig. 4.13: CVCS time dependent unavailabilities as
calculated by UNRAC and BIT-FRANTIC
177

Table 4.12: COMPARISON OF THE CVCS UNAVAILABILITY


-
CALCULATIONS FOR THE EXPONENTIAL AND
2ND ORDER ERLANGIAN REPAIR DISTRIBUTION

Repair Distribution
2nd Order
'Iop Event Exponential Erlangi'an

Maximum
Unavailability 6.8817E-04 1.7144E-03

Average
Unavailability 6.783E-04 1.705E-03

Time to reach to
its asymptotic
value Less than 4 days of operation
178

uses the second method, the maximum unavailability per demand

will be on the order of 1.7x 10 3 It is worth mentioning

that, the above results are typical numbers and cannot be

applied to any CVCS system without validating the data and

the model used.

4.5 A High Pressure Injection System (HPIS), Comparison of


WASH 1400 and UNRAC

The main purpose of this example is to show the applica-

bility of UNRAC for a large fault tree and the time saving

that can be achieved by using the techniques mentioned in

Section 3.2.2 for reducing the fault tree prior to quantifi-

cation. The system chosen for this study is taken from

WASH 1400, Appendix II. For the system description the reader
is referred to the above reference.

Figure 4.14 depicts the simplified flow diagram of a

High Pressure Injection System (HPIS). The original fault

tree of the HPIS has more than 8000 minimal cut sets (Garibba

et al. (1977)). Since at the present time UNRAC is limited to

3000 minimal cut sets, the original fault tree was further

reduced by neglecting the events with negligible unavail-

ability contributions and combining some of the basic events.

The resultant tree, shown in Fig. 4.15, consists of a total


of 90 basic events and 57 gates. UNRAC generates a total of
1718 minimal cut sets, in which there are 11 with single fail-

ure, 104 with double failures and 1603 triple failures per
cut sets.
179

WASH 1400 results for single and double failures are:

Qsingles= 1.1 x 10 - 3

Qdoubles = 2.5 x 10-3

The UNRAC results using the data given in WASH 1400 for single

and double failures per cut sets are: 1.2x 10 3 and 2.6x10- 3

respectively. The results are therefore quite consistent

with WASH 1400.

In another run, the failure of delivering water from

each pump was considered to be a single basic event. This

time the code generates a total of 99 minimal cut sets in

which there are 11 with single failure, 79 with double fail-

ures and 9 with triple failures per cut sets. The results of

this run was exactly the same as the previous run for single

and double failures.

The CPU time for the second run was a factor of 8 less

than the first run. This reduction in CPU time was the result

of combining the basic events input to each of the three OR

gates, (FCPA, FCPB, FCPC), shown in Fig. 4.15 (sheet 6) and

representing them by one new event. In other words, FCPA,

FCPB, and FCPC were considered to be the basic events and

Fig. 4.15 (sheet 6) was ignored. This action greatly reduced

the number of triple failures and, therefore, results in a


smaller number of minimal cut sets.
180

0
0

".e
I
CA

".4
0
-H
t
l1

00
".4
m,I
ee

H
ra

I I I
I
I
I . .
181

0o
0o
*tt

4cn
w
W

0
'0
0
U
0u

m
Cl
ww

o
0
41
I44
0
0
0
14
co
no

'0
0
0-i
'.4
.

ft
I
la
0
44
'0
il &I 0.
tHa 'I -41
I
,111 i
i il
I

I I Iii!' iI .4
a l
f r-A
-H
I I 'i 0T4
I Li
1 i33
I i
I !it!~il I!1 i
I'I,
a

4*4 64 *

.
182

I1

1
3

U,

0;

'u-I
v34
183

Qrn

ii
Y

Mo
_ _
a

_I

I en
44
0
0
Pc

,
l %.O
a w
0
4Co
W
4)
_ _ r,
_ _ r
IA,

-1
00
1*

Vz
rX4
184

e
Ol mf
2 II.,Y

* a a a a
S.
..

*t

I
iI,

. XJ

4-'

.4i
4)
a)
'AJi I
Pd

1-
0
C-
,I

00
·uii
T

i -H
r:4
185

Ii

Us
Un

4l
0
0

'0
40)
a
ta
*r

u
C.)

a
In
r--
'A

aO
.
F4
186

I
Xe a
> 3

'tI
;I.
'-
yo a
eg.
W-
P- I
p-

I
W

s-

i St
9:b

".1
'A

Ud
5
W.

t~
ul 4-i
Al a w
4D
.-I

4)

0
a 1qJ
13
a0

Lr
u0
o
Xj U
a

.-H
r4·Z4
187

CHAPTER 5: SUlARY, CONCLUSION, AND RECCtMENDATIONS

5.1 Summary and Conclusion

Reliability analysis is a method by which the degree of successful

performance of a system under certain stipulated conditions may be

expressed in quantitative terms. In order to establish a degree of suc-

cessful performance, it is necessary to define both the performance

requirement of the system and the expected performance achievement of the

system. The correlation between these two can then be used to formulate

a suitable expression of reliability as illustrated in the following

figure, Green and Bourne (1972).

The first step in a reliability analysis, therefore, is to ascertain

the pattern of variation for all the relevant performance parameters of

the system both from the point of view of requirement as well as from

likely achievement. Performance variations may be due not only to the

physical attributes of the system and its environment, but also the

basic concepts, ideas, and theories which lie behind the system's design.

Having established all the appropriate patterns of variations, the system

should be rigorously examined to check its ability to work in the required

way or fulfill the correct and safe overall function. In order to assess
188

the appropriate probability expression for each pattern, it is useful to

convert the system functional diagram into a logic sequence diagram.

There are two ways to develop the system logic diagram.

1) positive logic

2) negative logic

The positive logic is called reliability block diagram, whereas the

negative logic has acquired different names depending on the types of

logic interconnections. They are

1) fault tree

2) event tree

3) cause consequence chart.

Reliability analysis, by using method of fault tree is the most well-

known analytic method in use today. Event tree and cause consequence

chart (CCC) have also been used; however, by proper rearrangement, one can

easily map the CCC into event-tree and fault tree combinations and both

can be mapped into a set of fault trees for which the top events are the

consequences of the CCC or event tree.

Reliability analysis by using the method of fault tree is known as

Fault Tree Analysis (FTA). In FTA, since the system structure logic is

composed of a series of negative (failure) logic, the term reliability is

always replaced by the term "unreliability" and/or "unavailability". The


first step in any FTA is to generate the minimal cut sets which qualita-

tively delineate the paths of system failure through a simultaneous fail-

ure of certain components. The second step in FTA is to find a quanti-

tative value (unreliability or unavailability) for the top event and

evaluate the response of the system unavailability to the change in data


189

and error bounding of the basic components.

In the past decade a number of codes have been written for qualita-

tive and quantitative analysis of the fault tree, and complete analysis

requires a combination of several of these codes. Thus, as FTA becomes

more widely used it is highly desirable to have a single code which does

a complete analysis. The advantage of such a single code is that it


reduces the chance of likely error in the input and output process inher-
entin using multiple codes and it is for this reason that the present
study has developed such a code. The objective of this thesis was

to develop a code which could generate the minimal cut sets, evaluate

the system unavailability (point estimate and/or timedependent),


find
the quantitative importance of each components involved, and calculate

the error bound on the system unavailability evaluation. To develop

such a code various aspects of the methodologies used in REBIT[WOLF


(1975)],
KITT IVesely (1970)], FRANTIC (Vesely and Goldberg (1977)], SAIPLE

[WASH 1400 (1975)], and LIMITS [Lee and Salem (1978)] codes were used as

the basis for a single code package.

For generating the minimal cut sets, a portion of REBIT which is

called BIT was used. BIT, at the beginning, was limited to 32 components

and gates and would only accept AND and OR gates. For this study, it has

been modified to accept any kind of gates and its capability has been

increased at the present time to 250 components and/or gates. The cut set

generator, BIT, was bench marked against MODCUT [Modarres (1979)] and WAMCUT

[EPRI (1978)]. It was demonstrated that the BIT code is an efficient

and less time consuming code when generating all the minimal cut sets

(cf. Section 3.2). To accelerate the code even further a discrimina-


190

tory procedure based on the cut set size is implemented in the code.

This procedure eliminates cut sets larger than a given size.

For the quantification, a general and consistent set of mathematical

models for calculating component's unavailabilities was developed and

FRANTIC code was restructured to accommodate these models. To evaluate

the importance of each component in producing the top event, the methods

of Birnbaum and Fussel-Vesely were implemented. It is believed that both

methods are necessary in judging the importance of each component.

For error propagation analysis, a routine that allows this simulation

(Monte-Carlo) to be carried out on the component's failure characteristics

(failure rate, repair rate, test rate, etc.) as well as the average

components unavailability was developed. In addition, the routine was

written in such a way that one can use different distributions for the

component's failure characteristics. To rank the top event unavailabilit-

ies in order of their magnetudes to develop a distribution for this param-

eter a fast procedure similar to the one used in LIMITS code has been

employed.

All the above mentioned routines (i.e., cut set generator unavail-

ability evaluator and Monte-Carlo simulator) were coupled and the

combined routine is called UNReliability Analysis Code (UNRAC). The

UNRAC code was bench marked against BIT-FRANTIC [Karimi andWolf (1978)},

KITT IVesely (1970)] and PL-MODT [Modarres (1979)] codes. It was

demonstrated that UNRAC produces results closer to the KITT results than

either BIT-FRANTIC or PL-MODT, (cf. Figure 3.15).

To make UNRAC more versatile, the following features, which are

unavailable in any codes in use today, were implemented;


191.

1) Using a three state model (cf. Figure 3.9) to represent a

normally operating component in order to allow probabilities for

revealed faults, non-revealed faults, and false failures to be

incorporated in unavailability calculations.


2) Incorporating different distributions for the repair time den-

sity function in order to analyze the effect of different repair

policies on the system unavailability. One such example was

done for a periodically tested component and it was found out


that the constant repair distribution predicts a higher value
for the unavailability per demand than exponential repair

and both predictmore than a lognormal distribu-


distribution

tion (cf.: Tables 2.1 and 2.2 and Figure 2.9).

is the capability of
Anotherfeature that was incorporated in UNRAC
finding the importance of the componentsgiven that a componentor a
part of the system has already failed. This is an important measure for
the operation managmentin order to be able to recommendcertain proce-
dures for the maintenance of a componentor a subsystem by highlighting
the important componentsthat should not be perturbed.
The capabilities have been demonstrated
and models encoded in UNRAC
through a series of examples. Among them was the comparison of UNRAC and

WASH 1400 for AFWS. Although WASH 1400 was unable to reveal the strong

time dependence of the AFWS unavailability it did report the extreme

points unavailabilities which were quite consistent with time dependent

analysis. UNRAC estimated the maximum and minimum unavailability to be

1.4x10-2 and 5.2x10- 5 whereas WASH 1400 reported the values of 2.2x10 2

and 3.5xlO - 5 respectively (cf. Figures 4.3 and 4.5). It was also found
192

out that the simple lognormal distribution is a good estimate for the

top event distribution (cf.. Figures 4.6 and 4.7). This finding supports

the validity of WASH 1400 assumption of lognormal for the whole system.

Finally from the experience gained during this study we feel that 1200

iterations for Monte-Carlo simulation are adequate for most cases and

distributions.

5.2 Recommendations

In the course of this study, it was realized that a minimal cut set

generator code requires a large amount of storage capacity. UNRAC

requires a total of 800 K byte of storage capacity for all the routines

encoded. More than half of this storage is used for generating up to

3000 minimal cut sets with a maximum of 30 components per cut sets. How-

ever, this storage requirement could be decreased if a more efficient

method of storage and access could be used. One of the methods that

might be used is a scratch tape for temporarystorage of the variables in


the cut set generator subroutine. Another method that mightbe helpful
is the randomaccess storage routine.
The unavailability evaluator in UNRAC is based on the assumption of

exponential failure time density function. This distribution cannot

accommodate the break-in or wear-out period. Therefore, for better

prediction of time dependent unavailabilities distributions like gamma

or weibull might be more suitable.

Common-mode/common-cause (cm/cc) failure and its prevention has

been a serious concern in the reliability and safety analysis of nuclear

systems during the past few years. Several codes have been written to
193

identify the (cm/cc) candidates and analyze the system quantitatively.


Among these, some require a minimal cut set generator to pre-process the

fault tree, i.e., COICAN [Burdick, et al (1976)], ACFIRE (Cate and

Fussell(1977)] and some have already coupled with a cut set generator,

i.e., CCAN II [Rasmuson, et al (1978), BACFIRE II (Rooney and Fussell

(1978)] and SETS (Worrell and Stack (1977)].

The cut set generator in UNRAC, (BIT), uses bit manipulations for

generating and storing the minimal cut sets in the machine word. This

feature of UNRAC would make the code very useful for identifying the

(cm/cc) candidates. However, before any implementation of the (cm/cc)

candidates identifiers a review of the reports by: Rasmuson, et al

(1977), Fleming and Raabe (1978) and Edwards and Watson (1979) are

highly recommended.
·L

ii

i-

a;

u··

R··-

iii·

i;-

il-
1.94

REFERENCES

1. BARLOW R.E. and HUNTER L.C. (1961), "Reliability Analysis

of a one-unit System" Operation Res. V.8.No.l pp 200-208.

2. BARLOW R.E. and LAMBERT H.E. (1975), "Introduction to

Fault Tree Analysis" Presented at the conf. on Reliabi-

lity and Fault Tree Analysis, SIAM 1975, pp 7-35.

3. BARLOW R.E. and PROSCHAN F. (1965), Mathematical Theory

of Reliability, John Wiley Sons.

4. BARLOW R.E. and PROSCHAN F. (1975), Statistical Theory

of Reliability and Life Testing, Holt Rinehart and Wins-

ton Inc. New York.

5. BATTELLE RICHLAND (1976) "A Preliminary Assessment of

Accident Risks in a Conceptual High Level Wast Manage-

ment System", Battelle Richland, Washington.

6. BENETT R.G. (1975), "On the Analysis of Fault Tree"

IEEE Trans. on Rel. Vol. R-24-No.3 (Aug), pp 175-185.

7. BURDICK G.R. et al. (1976) "COMCAN - A Computer Program

for Common Cause Analysis" Aerojet Nuclear Company

ANCR - 1314 (May)


8. BURDICK G.R. et al. (1977) "Probabilistic Approach to

Advanced Reactor Design Optimization" Presented at the

Int. Conf. on Nuclear System Reliability Engineering and

Risk Assessment held in Gatlinburg TN June 20-24, SIAM

9. CALDAROLA L. (1977) "Unavailability and Failure Intensity

of Components"Nucl. Engn'g and Design 44 pp 147-162.


195

10. CASHWELL E.D. and EVERETT C.J. (1959), A Practical

Manual on the Monte-Carlo Method for Random Walk

Problems, Pergamon Press

11. CATE C.L. and FUSSELL J.B. (1977) "BAC FIRE - A Com-

puter Code for Common Cause Failure Analysis", Univer-

sity of Tennessee, Knoxville, TN (Aug).

12. CHATTERJEE P. (1974) "Fault Tree Analysis: Min Cut Set

Algorithm" ORC 721-2, Operation Research Center, Uni-

versity of California, Berkeley.

13. CONVEYOU R.R. and MACPHERSON R.D. (1967) "Fourier Analy-

sis of Uniform Random Number Generator" Journal of ACM,

14 pp 100-119.

14. COX D.R. (1963) Renewal Theory, John Wiley Sons, New

York.

15 CRAMER, H. (1946) Mathematical Methods of Statistics,

Princeton University Press.

16. DAVIS D.J. (1952) "An Analysis of Some Failure Data"


Journal of the American Statistical Association, Vol 47

No 258 (June).

17. DIXON W.J. and MASSEY JR., F.J. (1957) Introduction to

Statistical Analysis, 2nd Ed. McGraw Hill.


18. ELERATH J.G. and INGRAM G.E. (1979) "Treatment of Depen-

dencies in Reliability Analysis" Proceedings Annual Reli-

ability and Maintainability Symposium, pp 99-103


196

19. EDWARDS G.T. and WATSON I.A. (1979) "A Study of Common

Mode Failures" Report SRD-R-146, U.K. Atomic Energy

Establishment, Safety and Reliability Directorate,

(July).

20. EPRI-217-2-2 (1975) "Generalized Fault Tree Analysis for

Reactor Safety " Electric Power Research Inst. Palo Alto

California.

21. EPRI,NP-803 (1978) "WAM-CUT, A Computer Code for Fault

Tree Evaluation" Electric Power Research Inst. Palo Alto

California.

22. ERDMANN, R.C. et al. (1977) "A Method for Quantifying

Logic Models for Safety Analysis" Presented in Int.

Conf. on Nuclear System Reliability Engineering and

Risk Assessment held in Gatlinburg, TN June 20-24, SIAM.

23. EVANS R.A. (1974) "Fault Tree and Cause-Consequence

Chart" IEEE Trans. on Rel. Vol 23-page 1.

24. FELLER W. (1957) An Introduction to Probability Theory

and Its Applications, 2nd ed. John Wiley Sons New

York, Vol I.

25. FLEMING K.N. and RAABE P.H. (1978) "A Comparison of

Three Methods for the Quantitative Analysis of Common

Cause Failure" USDOE Report GA-A-14568, General Atomic

Corporation and Development.


197

26. FUSSELL J.B. (1973) "Fault Tree Analysis - Concepts and

Techniques" Proc. NATO Advanced Study Inst. on Generic

Techniques of Reliability Assessment, Liverpool (July).

27. FUSSELL J.B. et al. (1974) "MOCUS - A Computer Program

to Obtain Minimal Cut Sets from Fault Tree" ANCR-1156,

Aerojet Nuclear Company, Idaho Falls, Idaho.

28. FUSSELL J.B. et al. (1974b) "Fault Tree - A State of

Art Discussion" IEEE Trans. on Rel. Vol R-23 pp 51-55

(Apr).

29 GARIBBA S. et al. (1977) "Efficient Construction of

Minimal Cut Sets from Fault Tree" IEEE Trans. on Rel.

Vol R-26 No.2 (June) pp 88-94.

30 GARRICK B.J. (1970)"Principles of Unified Systems Safety

Analysis" Nucl. Eng'g and Design Vol 13 pp 245-321.

31. GOKECK 0. et al. (1979) "Markov Analysis of Nuclear

Plant Failure Dependencies" Proceedings Annual Rel.

and Maint. Sympos. pp 104-109.

32. GREEN A.E. and BOURNE A.J. (1972) Reliability Tech-

nology, Wiley - Interscience.


33. HAASL D.F. (1965) "Advanced Concepts in Fault Tree

Analysis" Presented at System Safety Symposium, June

8 - 9, Seattle Wash.
34. HAHN G.J. and SHAPIRO S.S. (1967) Statistical Models

in Engineering, John Wiley Sons New York.


198

35. HAMMERSLEY J.M. and HANDSCOMB D.C. (1964) Monte-Carlo

Methods, London Methuen Co. LTD.

36. HENLEY E.J. and KUMAMOTO H. (1978) "Top-Down Algorithm

for Obtaining Prime Implicant Sets of Non-Coherent Fault

Trees" IEEE Trans. on Rel. Vol R-27-No 4 (Oct) pp 242-

249.

37. HENLEY E.J. and WILLIAMS R.A. (1973) Graph Theory In

Modern Engineering, Academic Press New York.

38. HOWARD R.A. (1971) Dynamic Probablistic Systems Vol-I,

Vol II John Wiley Sons.

39. KARIMI R. and WOLF L. (1978) "BIT-FRANTIC - A Convenient

and Simple Code Package for Fault Tree Analysis and Un-

availability Calculations" Trans Am Nucl Soc, TANSO-30,

1 page 814.

40. KEMENY Y.G. and SNELL J.L. (1960) Finite Markov Chains

D. Van Nostrand Co.

41. LAMBERT H.E. (1975) "Measure of Importance of Events

and Cut Sets in Fault Tree" Presented in Reliability

and Fault Tree Analysis Conf. SIAIN Philadelphia pp 77-

100.

42. LEE Y.T. and APOSTOLAKIS G.E. (1976) "Probability In-

tervals for the Top Event Unavailability of Fault

Trees" Technical Report, UCLA-ENG-7663, School of En-


gineering and Applied Science, University of California

Los Angeles (June).


199

43. LEE Y.T. and SALEM S.L. (1977) "Probability Intervals

for The Reliability of Complex Systems Using Monte -

Carlo Simulation", UCLA-ENG-7758, School of Engineering

and Applied Science University of California, Los

Angeles, CA.

44. MACLAREN M.D. et al. (1964) "A Fast Procedure for Gener-

ating Exponential Random Number" Communication of ACM,7.

45. MANN N.R. et al. (1974) Methods for Statistical Analysis

of Reliability and Life Data, John Wiley Sons Inc.


46. MARSAGLIA G. and BRAY T.A. (1964a) "A Convenient Method

for Generating Normal Variables" SIAM Review 6.

47. MARSAGLIA G. et al. (1964b) "Fast Procedure for Gener-

ating Normal Random Variables" Communication of ACM7.

48. MASON, S.J. (1956) "Feedback Theory: Further Properties

of Signal Flow Graph" Proc. IRE 44 Page 920.

49. MATTHEWS S.D. (1977) "MOCARS: A Monte-Carlo Simulation

Code for Determining the Distribution and Simulation

Limits" TREE-1138 (July).

50. MCGRATH E.J. et al. (1975) "Techniques for Efficient

Monte-Carlo Simulations" ORNL-RSIC -38 Vol (2).

51. MODARRES M. (1979) "Reliability Analysis of Complex

Technical Systems Using The Fault Tree Modularization

Technique" Ph.D. Thesis, Nuclear Engineering Dept,

Massachusetts Institute of Technology.



52. Nuclear Engineering International (1979) "West German

Risk Report Echoes Rasmussen" News Review, Nuclear

Engineering International Vol 24 No 290 Page 4 (Sep).

53. PANDE P.K. et al. (1975) "Computerized Fault Tree

Analysis, TREEL-MICSUP" Operational Research Center

University of California, Berkeley, ORC75-3.

54. PARZEN E. (1960) Modern Probability Theory and Its

Applications, John Wiley Sons Inc. New York.

55. RASMUSON D.M. et al. (1978) "COMCAN II - A Computer

Program for Common Cause Failure Analysis" TREE-1289.

EG&amp;G Idaho, Inc. (Sep).

56. RASMUSON D.M. et al. (1979) "Common Cause Failure

Analysis Techniques; A Review and Comparative Evalua-

tion", Report TREE-1349.

57. RAU J.G. (1970) Optimization and Probability in System

Engineering, D. Van Nostrand Reinhold Co. N.Y.

58. RINGOT C. (1978) "French Safety Studies of Pressurized

Water Reactor" Nucl. Safety Vol 19 No 4, pp 411-427.

59. ROONEY J.J. and FUSSELL J.B. (1978) "BACKFIRE II - A

Computer Program for Common Cause Failure Analysis of

Complex Systems" University of Tennessee, Knoxville,

TN (Aug).

60. ROSENTAL A. (1975) "A Computer Scientist Looks at Fault

Tree Computation" Presented in the Conf. on Reliability

and Fault Tree Analysis.



61. SANDLER G.E. (1963) System Reliability Engineering

Prentice-Hall, Inc. Englewood Cliff N.J.

62. SEMANDERS S.N. (1971) "ELRAFT, A Computer Program for

Efficient Logic Reduction Analysis of Fault Tree" IEEE

Trans Nuclear Science Vol NS-18 pp 481-487.

63. SHELL D.L. (1959) "A High-Speed Sorting Procedure" Com-

munication of ACM Vol 12 No 7 pp 30-37.

64. SHOOMAN M.L. (1968) Probabilistic Reliability, An En-

gineering Approach, McGraw Hill.

65. TAKACS L. (1959) "On a Sojourn Time Problems in the

Theory of Stochastic Processes" Trans Amer Math Soc.

Vol 93 pp 531-540.

66. TAYLOR J.R. (1974) "Sequential Effects in Failure Mode

Analysis" RISO-M-1740 Danish, AEC Roskilde Denmark.

67. VAN SLYKE W.J. and GRIFFING D.E. (1975) "ALLCUTS, A Fast

Comprehensive Fault Tree Analysis Code" Atlantic Rich-

field Hanford Company, Richland Washington, ARH-ST-112

(July).
68. VESELY W.E. (1970) "A Time-Dependent Methodology for

Fault Tree Evaluation" Nuclear Eng'g and Design 13

pp 337-360.

69. VESELY W.E. and GOLDBERG F.F. (1977) "FRANTIC - A Com-

puter Code for Time Dependent Unavailability Analysis"

NUREG-0193, NRC report, Nuclear Regulatory Commission.



70. VESELY W.E. and NARUM R.R. (1970) "PREP and KITT, Com-

puter Codes for The Automatic Evaluation of a Fault

Tree" Idaho Nuclear Corporation, Idaho Falls, Idaho,

IN-1349.

71. WASH 1400, Reactor Safety Study, (1975), An Assessment

of Accident Risk in U.S. Commercial Nuclear Power

Plants" NUREG-75/014 Nuclear Regulatory Commission.

72. WILKES S. (1962) Mathematical Statistics,John Wiley

Sons Inc.

73. WOLF L (1975) "REBIT-A Computer Program for Fault Tree

Analysis" Unpublished Work, Department of Nuclear En-

gineering Massachusetts Institute of Technology.

74. WORREL R.B. (1975) "Using the Set Equation Transfor-

mation System (SETS) in Fault Tree Analysis", Relia-

bility and Fault Tree Analysis Conf. SIAM pp 165-185.

75. WORRELL R.B. and STACK D.W. (1977) "Common Cause

Analysis Using SETS", SAND77-1832 Sandia Labora-

tories, Albuquerque New Mexico.



APPENDIX A

DETAILED MATHEMATICAL EXPRESSIONS OF


EQNS. (2.39) AND (3.1)

A.1 Detailed Mathematical Form of Eqn. (2.39) for Exponential, 2nd Order
Erlangian and Log-normal Repair Time Density Functions

A.1.1 Exponential Repair Time Density Function

If we assume that the repair time density function is

    g(t) = \mu \exp(-\mu t),  with \mu constant                            (A.1)

then G(t) and the Y term in Eqn. (2.39) will be:

    G(t) = \int_0^t g(t') dt' = 1 - \exp(-\mu t)                           (A.2)

    Y = F(T_2 - T_c) - \int_0^{T_2 - T_c} \exp(-\mu\tau)\, f(t - \tau)\, d\tau    (A.3)

And, since we use an exponential distribution for the failure density
function, i.e., f(t) = \lambda \exp(-\lambda t), Eqn. (A.3) can be reduced
to:

    Y = 1 - \exp[-\lambda(T_2 - T_c)]
          - \frac{\lambda}{\mu - \lambda}\{\exp[-\lambda(T_2 - T_c)] - \exp[-\mu(T_2 - T_c)]\}   (A.4)

Substituting for Y from Eqn. (A.4) in Eqn. (2.39) and readjusting the
reference time to T_c, the resultant equation will read as:

    R_0(t - T_c) = 1 - \exp[-\lambda(t - T_c)]
        + \gamma\Big\{\exp[-\mu(t - T_c)]
        - \frac{\lambda}{\mu - \lambda}\big[\exp[-\lambda(t - T_c)] - \exp[-\mu(t - T_c)]\big]\Big\},
        \qquad T_c \le t \le T_2 - T_c                                      (A.5)

where

    \gamma = P_f + (1 - P_f)\,
        \frac{1 - \exp[-\lambda(T_2 - T_c)]
              - \frac{\lambda}{\mu - \lambda}\{\exp[-\lambda(T_2 - T_c)] - \exp[-\mu(T_2 - T_c)]\}}
             {1 + (1 - P_f)\,\frac{\lambda}{\mu - \lambda}\exp[-\lambda(T_2 - T_c)]}

A.1.2 2nd Order Erlangian Repair Time Density Function

In this case the repair time density function is:

    g(t) = \mu^2 t \exp(-\mu t)                                            (A.6)

and

    G(t) = 1 - \exp(-\mu t) - \mu t \exp(-\mu t)                           (A.7)

Using Eqn. (A.7), the Y term will be:

    Y = 1 - \exp[-\lambda(T_2 - T_c)]
        - \frac{\lambda}{\mu - \lambda}\Big(1 + \frac{\mu}{\mu - \lambda}\Big)
          \{\exp[-\lambda(T_2 - T_c)] - \exp[-\mu(T_2 - T_c)]\}             (A.8)

Substituting for Y and G(.) from Eqns. (A.8) and (A.7) in Eqn. (2.39) and
readjusting the reference time to T_c, the resultant equation will read:

    R_0(t - T_c) = 1 - \exp[-\lambda(t - T_c)]
        + \gamma\Big\{ [1 + \mu(t - T_c)]\exp[-\mu(t - T_c)]
        - \frac{\lambda}{\mu - \lambda}\Big(1 + \frac{\mu}{\mu - \lambda}\Big)
          \big[\exp[-\lambda(t - T_c)] - \exp[-\mu(t - T_c)]\big]\Big\},
        \qquad T_c \le t \le T_2 - T_c                                      (A.9)

where

    \gamma = P_f + (1 - P_f)\,
        \frac{1 - \exp[-\lambda(T_2 - T_c)]
              - \frac{\lambda}{\mu - \lambda}\big(1 + \frac{\mu}{\mu - \lambda}\big)\exp[-\lambda(T_2 - T_c)]}
             {1 + (1 - P_f)\,\frac{\lambda}{\mu - \lambda}\big(1 + \frac{\mu}{\mu - \lambda}\big)\exp[-\lambda(T_2 - T_c)]}

A.1.3 Log-normal Repair Time Density Function

In this case, g(t) and G(t) are:

    g(t) = \frac{1}{t\sigma\sqrt{2\pi}}
           \exp\Big[-\tfrac{1}{2}\Big(\frac{\ln t - \mu}{\sigma}\Big)^2\Big]          (A.10)

and

    G(t) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \exp(-\tfrac{1}{2}x'^2)\, dx'     (A.11)

where

    x = (\ln t - \mu)/\sigma

Using G(.), the Y term will read:

    Y = F(T_2 - T_c) - \int_0^{T_2 - T_c}
        \Big[1 - \int_{-\infty}^{(\ln\tau - \mu)/\sigma}
        \frac{1}{\sqrt{2\pi}} e^{-x^2/2}\, dx\Big] f(t - \tau)\, d\tau                (A.12)

The above equation cannot be reduced further, and it is impossible to find a
reduced expression for Eqn. (2.34).  However, Eqn. (2.34) can be solved by
successive numerical integration, which has been performed to find the
results in the fourth column of Table 2.2.
A.2 Detailed Mathematical Form of Eqn. (3.1) for Different Repair Time
Density Functions

Equation (3.1) was produced by applying Mason's rule to the general
three-state component shown in Fig. 3.9 and is:

    \Phi_{11}(s) = \frac{1 - F^*(s)}
        {s\,[\,1 - P_4 F^*(s) P_1 - P_3 F^*(s) P_2 T^*(s) G^*(s)\,]}        (A.13)

In this section one of the following repair distributions will be assumed to
represent the repair time density function, g(t).  They are:

    1) Exponential
    2) 2nd order Erlangian
    3) Log-normal (approximated by a combination of exponentials)

We will also assume the failure and test time distributions to be
exponential, with the constant parameters \lambda and \nu respectively.

A.2.1 Exponential Repair Time Density Function

In this case, F^*(s), G^*(s), and T^*(s) are:

    F^*(s) = \lambda/(\lambda + s)                                         (A.14)
    G^*(s) = \mu/(\mu + s)                                                 (A.15)
    T^*(s) = \nu/(\nu + s)                                                 (A.16)

Substituting for F^*(s), G^*(s) and T^*(s) from (A.14) to (A.16) in
Eqn. (A.13) and simplifying the result we get:

    \Phi_{11}(s) = \frac{(\mu + s)(\nu + s)}{s\,[\,s^2 + a s + b\,]}        (A.17)

where

    a = \lambda + \nu + \mu - P_1 P_3 \lambda
    b = \lambda\nu + \mu\nu + P_3\lambda\mu - P_1 P_3 (\nu + \mu)\lambda

To transform Eqn. (A.17) into the t domain, the method of partial fractions
will be used.  To consider all the possible situations in factorizing the
denominator, the results are divided into three distinct parts.

A) The case where a^2 - 4b > 0

   In this case the denominator can be factorized to:

    s^2 + a s + b = (s + s_1)(s + s_2),
    s_1, s_2 = \frac{a \mp \sqrt{a^2 - 4b}}{2}

   and Eqn. (A.17) will become:

    \Phi_{11}(s) = C_1/s + C_2/(s + s_1) + C_3/(s + s_2)

   where

    C_1 = \mu\nu/b
    C_2 = \frac{(\mu - s_1)(\nu - s_1)}{-s_1 (s_2 - s_1)}
    C_3 = \frac{(\mu - s_2)(\nu - s_2)}{-s_2 (s_1 - s_2)}

   Using the table of inverse Laplace transforms we get:

    \Phi_{11}(t) = C_1 + C_2 \exp(-s_1 t) + C_3 \exp(-s_2 t)               (A.18)

   The value of \Phi_{11}(t) when t becomes large is equal to C_1.
   Therefore:

    (\Phi_{11})_\infty = \mu\nu/b

B) The case where a^2 - 4b = 0

   In this case s_1 and s_2 will be equal and, using the routine discussed
   earlier, \Phi_{11}(t) will read:

    \Phi_{11}(t) = C_1 + C_2 \exp(-s_1 t) + C_3\, t \exp(-s_1 t)           (A.19)

   where

    C_1 = \mu\nu/b
    C_2 = \frac{\mu + \nu - 2 s_1}{-s_1} + \frac{C_3}{s_1}
    C_3 = \frac{(\mu - s_1)(\nu - s_1)}{-s_1}
    s_1 = s_2 = a/2

C) The case where a^2 - 4b < 0, or 4b - a^2 > 0

   If the above situation is true, then Eqn. (A.17) shall be factorized to:

    \Phi_{11}(s) = \frac{C_1}{s}
        + \frac{C_2 s + C_3}{(s + a/2)^2 + (4b - a^2)/4}

   Using the table of inverse Laplace transforms we get:

    \Phi_{11}(t) = C_1 + e^{-a t/2}\Big[ C_2 \cos(k t)
                 + \frac{C_3 - C_2\, a/2}{k}\,\sin(k t) \Big]              (A.20)

   where

    C_1 = \mu\nu/b
    C_2 = 1 - C_1
    C_3 = (\mu + \nu) - a C_1
    k = \sqrt{4b - a^2}/2
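The partial-fraction step above can also be carried out numerically.  The
following is a minimal illustrative sketch (not part of the UNRAC coding):
for assumed numerical values of \lambda, \mu, \nu, P_1 and P_3 it forms the
coefficients a and b of Eqn. (A.17), finds the poles and residues of
\Phi_{11}(s) with numpy, and evaluates \Phi_{11}(t).  The parameter values
are purely illustrative.

    import numpy as np

    # Illustrative (assumed) parameter values: failure, repair and test rates
    lam, mu, nu = 1.0e-3, 0.1, 0.5        # per hour
    P1, P3 = 0.1, 0.05                    # assumed transition probabilities

    # Coefficients of Eqn. (A.17): phi11(s) = (mu+s)(nu+s) / (s (s^2 + a s + b))
    a = lam + nu + mu - P1 * P3 * lam
    b = lam * nu + mu * nu + P3 * lam * mu - P1 * P3 * (nu + mu) * lam

    # Poles of the transform: s = 0 plus the roots of s^2 + a s + b
    poles = np.concatenate(([0.0], np.roots([1.0, a, b])))

    def residue(p):
        # Residue of phi11(s) at a simple pole p (denominator differentiated)
        num = (mu + p) * (nu + p)
        den = 3.0 * p**2 + 2.0 * a * p + b     # d/ds [s^3 + a s^2 + b s]
        return num / den

    def phi11(t):
        # Time-domain solution, valid when all three poles are distinct
        return sum(residue(p) * np.exp(p * t) for p in poles).real

    for t in (0.0, 10.0, 100.0, 1000.0):
        print(f"t = {t:7.1f} h   phi11 = {phi11(t):.6f}")

    # Asymptotic value: residue at s = 0, i.e. mu*nu/b as in case A above
    print("steady state:", mu * nu / b)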

A.2.2 2nd Order Erlangian Repair Time Density Function

In this case, g(t) has the following form:

    g(t) = \mu^2 t \exp(-\mu t)

and its Laplace transform is:

    G^*(s) = \mu^2/(\mu + s)^2                                             (A.21)

Substituting for G^*(s), T^*(s), and F^*(s) from Eqns. (A.21), (A.16) and
(A.14) in Eqn. (A.13) and simplifying the resultant equation we get:

    \Phi_{11}(s) = \frac{(\mu + s)^2 (\nu + s)}{s\,[\,s^3 + a s^2 + b s + c\,]}   (A.22)

where

    a = \lambda + \nu + 2\mu - P_1 P_3 \lambda
    b = 2(\lambda + \nu)\mu + \mu^2 + \lambda\nu - P_1 P_3 \lambda (2\mu + \nu)
    c = (\nu + \lambda)\mu^2 + 2\mu\nu\lambda - P_1 P_3 \lambda (2\mu\nu + \mu^2)
        - P_4 \lambda \mu^2

To factorize the denominator we have to find its roots.  One method is to
reduce the bracketed cubic to a manageable depressed form.  If we substitute
x - a/3 for s, the denominator will reduce to:

    Den = s\,[\,x^3 + p x + q\,]                                           (A.23)

where

    p = b - a^2/3
    q = 2(a/3)^3 - ab/3 + c

Then the roots of the bracketed term will be

    [x_1, x_2, x_3] = [\,D + E,\; -(D+E)/2 + (D-E)\sqrt{-3}/2,\;
                        -(D+E)/2 - (D-E)\sqrt{-3}/2\,]

where

    D = \sqrt[3]{-q/2 + \sqrt{Q}},  E = \sqrt[3]{-q/2 - \sqrt{Q}},
    Q = (p/3)^3 + (q/2)^2

Depending on the value of Q the roots change from a mixture of real and
complex roots to three real roots.  Hence we have three distinct situations
for evaluating \Phi_{11}(t).

A) The case where Q > 0

   This case results in one real root and two complex conjugate roots.  By
   factorizing Eqn. (A.23), and taking into account that only one root of
   Eqn. (A.23) should be positive, Eqn. (A.22) will be factorized to:

    \Phi_{11}(s) = \frac{C_1}{s} + \frac{C_2}{s + y}
                 + \frac{C_3 s + C_4}{(s + Z)^2 + w^2}                     (A.24)

   where

    y = a/3 - (D + E)
    Z = a/3 + (D + E)/2
    w^2 = 3\,[(D - E)/2]^2

   Equating the right-hand sides of Eqns. (A.24) and (A.22) results in the
   values of C_1 to C_4:

    C_1 = \mu^2\nu/c
    C_2 = [\,d_2 - y(d_1 - y) - d_3/y\,] / [\,(Z - y)^2 + w^2\,]
    C_3 = 1 - (C_1 + C_2)
    C_4 = d_1 - y C_3 - 2Z C_2 - (2Z + y) C_1

    d_1 = 2\mu + \nu
    d_2 = \mu(\mu + 2\nu)
    d_3 = \mu^2\nu

   Knowing the values of C_1 to C_4, \Phi_{11}(t) can be calculated from:

    \Phi_{11}(t) = C_1 + C_2 \exp(-y t)
        + \exp(-Z t)\Big[ C_3 \cos(|w| t)
        + \frac{C_4 - C_3 Z}{|w|}\,\sin(|w| t) \Big]                       (A.25)

B) The case where Q = 0

   This case results in two distinct real roots (one of them double).  When
   Q = 0, then D = E, and Eqn. (A.22) can be written as:

    \Phi_{11}(s) = \frac{C_1}{s} + \frac{C_2}{s + a/3 - 2D}
                 + \frac{C_3}{(s + a/3 + D)^2} + \frac{C_4}{s + a/3 + D}

   The constants can be calculated from the following equations:

    C_1 = \mu^2\nu/c
    C_2 = \frac{(\mu - a/3 + 2D)^2 (\nu - a/3 + 2D)}{(2D - a/3)(3D)^2}
    C_3 = \frac{(\mu - a/3 - D)^2 (\nu - a/3 - D)}{(a/3 + D)(3D)}
    C_4 = \frac{(\mu - a/3 - D)^2 + 2(\mu - a/3 - D)(\nu - a/3 - D)}{(a/3 + D)(3D)}
          + \frac{(a/3 + 4D)\,C_3}{(a/3 + D)(3D)}

   Knowing C_1 to C_4, \Phi_{11}(t) can be calculated from:

    \Phi_{11}(t) = C_1 + C_2 \exp[-(a/3 - 2D) t]
        + C_3\, t \exp[-(a/3 + D) t] + C_4 \exp[-(a/3 + D) t]              (A.26)

C) The case where Q < 0

   This case results in three different real roots.  Therefore, Eqn. (A.22)
   can be written as:

    \Phi_{11}(s) = \frac{C_1}{s} + \frac{C_2}{s + s_1}
                 + \frac{C_3}{s + s_2} + \frac{C_4}{s + s_3}

   where

    s_1 = a/3 - 2\sqrt{-p/3}\,\cos(\theta/3)
    s_2 = a/3 + 2\sqrt{-p/3}\,\cos[(\theta + \pi)/3]
    s_3 = a/3 + 2\sqrt{-p/3}\,\cos[(\theta - \pi)/3]
    \theta = \cos^{-1}\big[\, -q/2 \,/\, \sqrt{-(p/3)^3}\, \big]

    C_1 = \mu^2\nu/c
    C_2 = \frac{(\mu - s_1)^2 (\nu - s_1)}{-s_1 (s_2 - s_1)(s_3 - s_1)}

   and C_3, C_4 follow from the same expression with the roots permuted.
   Finally, \Phi_{11}(t) will read:

    \Phi_{11}(t) = C_1 + C_2 \exp(-s_1 t) + C_3 \exp(-s_2 t)
                 + C_4 \exp(-s_3 t)                                        (A.27)

A.2.3 Log-normal Repair Time Density Function

Here, we propose to replace the log-normal distribution with a combination
of exponentials.  To do so, we approximate the log-normal with the following
equation:

    g(t) = A[\exp(-\lambda_1 t) - \exp(-\lambda_2 t)]                      (A.28)

It was found that the above equation results in a good approximation for a
short-tailed log-normal, and a conservative value for a long-tailed
log-normal distribution.  These results can be seen in Tables A.1 to A.3.
Table A.1: COMPARISON OF THE CUMULATIVE PROBABILITY DISTRIBUTION OF A
LOG-NORMAL WITH \mu = 3.0445 AND \sigma = 0.57 WITH ITS APPROXIMATING
COMBINATION OF EXPONENTIALS WITH \lambda_1 = 6.319E-2 AND \lambda_2 = 1.1263E-1

    Time      Cumulative Value (%)
    (hour)    Log-normal    Approx. Exponential
    5         6.0E-01        6.68
    10        9.68          20.3
    20        46.6          49.0
    30        73.4          70.1
    40        87.1          83.2
    50        93.6          90.8
    60        96.7          95.0
    70        98.2          97.3
    80        99.0          98.6
    90        99.45         99.23
    100       99.70         99.60
    110       99.81         99.78
    120       99.98         99.99
    160       99.98         99.99
    180       99.99        100.00
Table A.2: COMPARISON OF THE CUMULATIVE PROBABILITY DISTRIBUTION OF A
LOG-NORMAL WITH \mu = 3.0445 AND \sigma = 0.2 WITH ITS APPROXIMATING
COMBINATION OF EXPONENTIALS WITH \lambda_1 = 8.1783E-2 AND \lambda_2 = 1.0873E-1

    Time      Cumulative Value (%)
    (hour)    Log-normal    Approx. Exponential
    1          0.0            0.417
    5          0.00           8.15
    10         0.01          24.2
    15         4.63          41.1
    20        40.4           55.9
    25        80.8           67.8
    30        96.27          76.93
    35        99.46          83.70
    40        99.94          88.6
    45        99.99          91.8
    50       100.00          94.56
Table A.3: COMPARISON OF THE CUMULATIVE PROBABILITY DISTRIBUTION OF A
LOG-NORMAL WITH \mu = 4.60517 AND \sigma = 0.66785 WITH ITS APPROXIMATING
COMBINATION OF EXPONENTIALS WITH \lambda_1 = 1.0788E-2 AND \lambda_2 = 3.0955E-2

    Time      Cumulative Value (%)
    (hour)    Log-normal    Approx. Exponential
    5          0.00           0.3896
    10         0.03           1.456
    20         0.82           5.1
    40         8.5           15.8
    60        22.21          28.0
    80        36.92          39.74
    100       50.00          50.20
    120       60.76          59.24
    160       75.90          73.10
    200       85.01          82.40
    300       95.00          93.97
    400       98.90          98.93
    500       99.20          99.30
    600       99.63          99.76
    700       99.82          99.92
    800       99.91          99.97
    900       99.95          99.99
To find the parameters used in Eqn. (A.28), the following properties of the
two distributions (the log-normal and Eqn. (A.28)) are equated:

    1) Mean values
    2) Median values

To clarify the method, we present the following example.  Let us assume that
we want to find the approximate distribution of a log-normal (i.e., to find
the parameters of Eqn. (A.28)) with the following properties:

    \mu = 3.0445
    \sigma = 0.57

The first step is to find the parameter A.  The parameter A can be found by
normalizing Eqn. (A.28), i.e., \int_0^\infty g(t) dt = 1, and is:

    A = \lambda_1 \lambda_2 / (\lambda_2 - \lambda_1)                      (A.29)

The above normalization is necessary because, for g(t) to be a probability
density function, its area over all possible values of t should be equal to
one.

The mean value of Eqn. (A.28) is given by:

    \mu_1 = \int_0^\infty t\, g(t)\, dt
          = (\lambda_2 + \lambda_1)/(\lambda_1 \lambda_2)                  (A.30)

The median value of Eqn. (A.28) is the time t_m for which:

    \int_0^{t_m} g(t)\, dt = 0.5                                           (A.31)

Now, if we replace \mu_1 and t_m by the corresponding properties of the
log-normal distribution, we will have:

    \mu_1 = e^{\mu + \sigma^2/2},    t_m = e^{\mu}                         (A.32)

By substituting \mu_1 and t_m from Eqn. (A.32) in Eqns. (A.30) and (A.31)
and simplifying the resultant equations, we get the following correlation:

    x \exp(-a/x) - \exp(-a x) = 0.5 (x - 1) \exp(a)                        (A.33)

where

    a = t_m / \mu_1,     x = \lambda_2 / \lambda_1

Equation (A.33) can only be solved by numerical iteration.  To do so, we
simplify the above equation further.  We can expand the first exponential
term to the third or fifth term of its series, i.e.:

    \exp(-a/x) = 1 - a/x + (a/x)^2/2! - (a/x)^3/3! + (a/x)^4/4! - ...      (A.33a)

By using the properties of the log-normal and the first three terms of
Eqn. (A.33a), Eqn. (A.33) can be put into the following form:

    x = \frac{0.5 e^{a} - a + a^2/(2x) - e^{-a x}}{0.5 e^{a} - 1}          (A.34)

where, for this example, a \approx 0.85.

One method of solving an equation such as Eqn. (A.34) is to assume a trial
value x_0, compute x_1 = f(x_0) and x_2 = f(x_1), and then find a new
adjusted value for x_0 from the following equation (see Appendix A.3):

    x_0' = \frac{x_1^2 - x_0 x_2}{2 x_1 - x_0 - x_2}                       (A.35)

If we repeat the above procedure long enough, the value of x_0 approaches
the actual root of the equation.  The routine discussed above was used to
find the value of x in Eqn. (A.34).  It was found that x has the value
1.782471, and \lambda_1 and \lambda_2 were calculated to be:

    \lambda_2 = 1.12634E-1
    \lambda_1 = 6.31899E-2

To check the accuracy of the above results, the cumulative distribution
value of g(t) at the median point of the log-normal distribution was
calculated from the following equation:

    Pr(t \le e^{\mu}) = \int_0^{e^{\mu}} g(t)\, dt = 0.485

The above result shows that the matched distribution is a good
approximation to the original distribution.  However, if the first five
terms of Eqn. (A.33a) are used (instead of the first three terms), the
resultant probability at t = e^{\mu} is 0.4928.  Here, the difference from
the actual probability of the original distribution is less than one
percent, which again shows the consistency between the distributions.

Having found the \lambda_1 and \lambda_2 parameters, one can easily find
\Phi_{11}(t) from the previous repair distribution (2nd order Erlangian),
because the Laplace transform of g(t) is:

    G^*(s) = \lambda_1 \lambda_2 / [(\lambda_1 + s)(\lambda_2 + s)]        (A.36)

and can replace Eqn. (A.21).  Hence, the parameters a, b and c in
Eqn. (A.22) for the matched log-normal will read:

    a = \lambda + \nu + \lambda_1 + \lambda_2 - P_1 P_3 \lambda
    b = (\lambda + \nu)(\lambda_1 + \lambda_2) + \lambda_1 \lambda_2 + \nu\lambda
        - P_1 P_3 \lambda (\lambda_1 + \lambda_2 + \nu)
    c = (\nu + \lambda)\lambda_1 \lambda_2 + (\lambda_1 + \lambda_2)\nu\lambda
        - P_1 P_3 \lambda (\lambda_1\nu + \lambda_2\nu + \lambda_1\lambda_2)
        - P_4 \lambda \lambda_1 \lambda_2                                  (A.37)

And the procedure to find \Phi_{11}(t) is the same as the one discussed in
the previous section (wherever we have \mu^2 and 2\mu, we replace them by
\lambda_1\lambda_2 and (\lambda_1 + \lambda_2), respectively).
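As a check on the numbers quoted above, the iteration of Eqn. (A.34) is
easy to reproduce.  The sketch below is illustrative only (it is not the
UNRAC coding): it solves Eqn. (A.34) by plain fixed-point iteration for the
example \mu = 3.0445, \sigma = 0.57 and then recovers \lambda_1 and
\lambda_2 from the mean constraint (A.30); Appendix A.3 describes how the
same iteration is accelerated.

    import math

    mu_ln, sigma_ln = 3.0445, 0.57
    mean_ln = math.exp(mu_ln + sigma_ln**2 / 2.0)   # mean of the log-normal
    median_ln = math.exp(mu_ln)                     # median of the log-normal
    a = median_ln / mean_ln                         # the constant 'a' of Eqn. (A.34), ~0.85

    def f(x):
        # Right-hand side of Eqn. (A.34), written as x = f(x)
        return (0.5 * math.exp(a) - a + a**2 / (2.0 * x)
                - math.exp(-a * x)) / (0.5 * math.exp(a) - 1.0)

    x = 2.0                      # initial guess for x = lambda2 / lambda1
    for _ in range(50):          # plain fixed-point iteration (|f'| < 1 here)
        x = f(x)

    lam1 = (x + 1.0) / (x * mean_ln)   # from 1/lam1 + 1/lam2 = mean, lam2 = x*lam1
    lam2 = x * lam1
    print(f"x = {x:.6f}, lambda1 = {lam1:.5e}, lambda2 = {lam2:.5e}")
    # Should land close to x = 1.78247, lambda1 = 6.319E-2, lambda2 = 1.126E-1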
A.3 Solution of Equations of the Form x = f(x)

In certain cases, when the equation can be put in the form x = f(x), where f
is a known function of x, there is a particularly efficient way of
proceeding to the solution.

Let x = \xi be the root of the above equation; i.e.,

    \xi = f(\xi)                                                           (A.38)

Let x_0 be an initial trial value for the root \xi and define x_1 = f(x_0).
Suppose that x_0 is related to the exact root by:

    x_0 = \xi + \epsilon                                                   (A.39)

Then,

    x_1 = f(x_0) = f(\xi + \epsilon) = f(\xi) + \epsilon f'(\xi) + O(\epsilon^2)

or

    x_1 = \xi + \epsilon f'(\xi) + O(\epsilon^2)                           (A.40)

Now, let \hat{x}_0 denote the next initial value to use, and define it as a
linear combination of the first initial trial value x_0 and the derived
value x_1:

    \hat{x}_0 = A x_0 + B x_1                                              (A.41)

By substituting for x_0 and x_1 from Eqns. (A.39) and (A.40) in Eqn. (A.41)
we get:

    \hat{x}_0 = A(\xi + \epsilon) + B(\xi + \epsilon f'(\xi)) + O(\epsilon^2)
              = (A + B)\xi + \epsilon\,(A + B f'(\xi)) + O(\epsilon^2)     (A.42)

Since we wish \hat{x}_0 to be as close to the root as possible, we set

    A + B = 1                                                              (A.43)

and

    A + B f'(\xi) = 0                                                      (A.44)

Solving Eqns. (A.43) and (A.44) for A and B:

    B = \frac{1}{1 - f'(\xi)},    A = 1 - B                                (A.45)

The derivative f'(\xi) is not known, however; it can be approximated by,
say, f'(x_0).  To avoid differentiating f(x), we can define x_2 = f(x_1) and
then in turn approximate f'(x_0) by

    \frac{x_2 - x_1}{x_1 - x_0}

Using this expression together with Eqn. (A.45) in Eqn. (A.41), we obtain

    \hat{x}_0 = \frac{x_1^2 - x_0 x_2}{2 x_1 - x_0 - x_2}
              = x_2 + \frac{(x_2 - x_1)^2}{2 x_1 - x_0 - x_2}              (A.46)

Thus, \hat{x}_0 can be used as a new initial trial value and the process
repeated.  Convergence is usually very rapid, since the error for each
successive trial value is O(\epsilon^2) when the error for the previous
trial value is \epsilon.  However, if f'(\xi) = 1, the process, if it
converges at all, converges more slowly, since from Eqn. (A.44) we cannot
then make both A + B = 1 and the error of order \epsilon vanish.*

*This result is known as Aitken's \delta^2 process (see F.B. Hildebrand,
Introduction to Numerical Analysis, New York: McGraw-Hill Co. (1956),
p. 445).
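A minimal illustrative sketch of the Aitken update of Eqn. (A.46) follows.
The test problem x = cos(x) is chosen purely for illustration; the same loop
applies unchanged to Eqn. (A.34).

    import math

    def aitken_fixed_point(f, x0, tol=1e-12, max_iter=50):
        # Fixed-point iteration accelerated with Aitken's delta-squared
        # process, Eqn. (A.46)
        for _ in range(max_iter):
            x1 = f(x0)
            x2 = f(x1)
            denom = 2.0 * x1 - x0 - x2
            if denom == 0.0:             # f'(xi) ~ 1: acceleration breaks down
                return x2
            x_new = (x1 * x1 - x0 * x2) / denom   # Eqn. (A.46)
            if abs(x_new - x0) < tol:
                return x_new
            x0 = x_new
        return x0

    root = aitken_fixed_point(math.cos, x0=1.0)
    print(root, math.cos(root))   # root of x = cos(x), about 0.7390851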

APPENDIX B

SOME BASIC BACKGROUND AND RELATED


EXAMPLES USED IN SECTION 3.1

B.1 Mathematical Background for Boolean Operations

Let A, B, C denote sets or collections of sample points in the sample space
S.  Associated with the event "A occurs" is the event "A does not occur,"
which is denoted by Ā and called the complement of event A, i.e., the sample
points of S not contained in A.  To symbolize "x (any point) is contained in
A," we use x ∈ A.  However, when we say "the event A is contained in the
event B," we write A ⊂ B, and we mean "the occurrence of event A implies the
occurrence of event B."

There are two fundamental operations that have had wide application in fault
tree analysis; they are the union (∪) or OR and the intersection (∩) or AND
operations.  The event A∪B is called the union of A and B, or the event
"either A or B (or both) occur."  This is equivalent to saying that if x ∈ A
or x ∈ B (or both), then x ∈ A∪B, or in closed form we can write:

    A∪B = {x | x ∈ A or x ∈ B}                                             (B.1)

The event A∩B is called the intersection of A and B, or the event "both A
and B occur," and in closed form is

    A∩B = {x | x ∈ A and x ∈ B}

Two sets are disjoint or mutually exclusive if they have no common elements;
thus the intersection of two disjoint sets is the null set.  This is written
symbolically as

    A∩B = ∅

The operations between sets or events can be represented using a Venn
diagram, which displays the subsets of S as in Fig. B.1.  Figure B.1 shows
four disjoint sets that form S, and there exist 16 unique subsets which can
be produced by all the possible union combinations of the sets 1, 2, 3 and
4.  Table B.1 shows the operations between two sets A and B and summarizes
some of the Boolean operation laws.  Figure B.1 can be used to check some of
the operations mentioned in Table B.1.

Fig. B.1: A general Venn diagram.


TABLE B.1: LIST OF BOOLEAN OPERATIONS AND IDENTITIES

1. The complement of Ā is A itself (the Involution Law).

2. A ∩ A = A and A ∪ A = A: the intersection/union of A with itself is A
   (the Idempotent Law).

3. A ∩ B = B ∩ A and A ∪ B = B ∪ A: the commutative law for the
   intersection/union of events.

4. A ∪ (A ∩ B) = A and A ∩ (B ∪ A) = A: the absorption law.

5. The complement of (A ∩ B) is Ā ∪ B̄, and the complement of (A ∪ B) is
   Ā ∩ B̄: the dualization law (de Morgan's Law).

6. A ∩ Ā = ∅ and A ∪ Ā = S: the complementary relations.  The first states
   that an event and its complement cannot both occur; the second states
   that an event or its complement must occur.

7. A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C): the distributive law of intersection
   with respect to union.

8. A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C): the distributive law of union with
   respect to intersection.

9. (A ∩ B̄) ∪ (Ā ∩ B): exclusive OR; (A ∩ B) ∪ (Ā ∩ B̄): coincidence.
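The identities of Table B.1 are easy to check mechanically.  The small
illustrative sketch below (not part of UNRAC) uses Python's built-in set
type with a finite sample space S to verify de Morgan's law and the
distributive laws that underlie the fault-tree reductions of Section 3.1;
the particular sets are arbitrary.

    # Finite sample space and three example events (arbitrary illustrative sets)
    S = set(range(1, 13))
    A = {1, 2, 3, 4}
    B = {3, 4, 5, 6}
    C = {4, 6, 8, 10}

    def comp(X):
        # Complement with respect to the sample space S
        return S - X

    # de Morgan's laws (Table B.1, item 5)
    assert comp(A & B) == comp(A) | comp(B)
    assert comp(A | B) == comp(A) & comp(B)

    # Distributive laws (items 7 and 8)
    assert A & (B | C) == (A & B) | (A & C)
    assert A | (B & C) == (A | B) & (A | C)

    # Complementary relations (item 6)
    assert A & comp(A) == set() and A | comp(A) == S

    # Exclusive OR (item 9)
    assert (A & comp(B)) | (comp(A) & B) == A ^ B

    print("All Boolean identities of Table B.1 verified on the example sets.")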

B.2 Application of UNRAC to the Two Fault Tree Examples, for Comparison with
MODCUT and WAM-CUT

In this appendix, the results of the minimal cut sets generated by BIT are
given.  Figure B.2 shows a fault tree diagram which is used in Modarres
(1979).  The corresponding minimal cut sets are shown in Table B.2.  The
component numbers in Table B.2 are those assigned in Figure B.2.

Fig. B.3 shows an example fault tree which is used to compare the results of
BIT with WAM-CUT (EPRI (1978)).  The fault tree is numbered according to the
input descriptions to UNRAC.  As can be seen, the combination gate is
replaced by four gates.  Before inputting the tree, gate number 24, which is
a NOR gate, should be transformed according to Figure 3.3 to an AND gate
with complemented input events.  Table B.3 shows all the minimal cut sets.
In comparing to the results given in EPRI (1978), one should note that
Table B.3 gives all the minimal cut sets, whereas EPRI (1978) reported only
those cut sets with a probability value of 10^-6 or more.  Therefore, in
Table B.3, five more minimal cut sets are generated.

Table B.2: LIST OF MINIMAL CUT SETS GENERATED


BY BIT FOR FIG. B.2

CUT SET NO. NO. OF CO'AP. IN C. S. COMPONENTS NOS.

1 3 1 2 3
2 3 1 2 4
3 4 1 2 5 16
4 4 1 2 13 17
5 4 1 2 15 19
6 4 1 2 8 20
7 4 1 2 10 21
8 4 1 2 18 23
9 4 1 2 18 25
10 4 1 2 10 26
11 4 1 2 22 28
12 4 1 2 22 24
13 4 1 2 22 23
14 4 1 2 22 29
15 4 1 2 22 30
16 4 1 2 35 36-
17 4 1 2 22 27
18 5 1 2 18 33 34
19 5 1 2 22 31 32
20 5 1 2 12 13 35
21 5 1 2 7 8 36
22 5 1 2 9 10 36
23 5 1 2 14 15 35
24 5 1 2 5 6 36
25 5 1 2 11 23 37
26 5 1 2 11 28 36
27 5 1 2 11 24 36
28 5 1 2 11 23 36
29 5 1 2 11 29 36
30 5 1 2 11 30 36
31 5 1 2 23 35 37
32 5 1 2 25 35 37
33 5 1 2 26 35 37
34 5 1 2 11 27 36
35 6 1 2 11 12 13 28
36 6 1 2 11 12 13 24
37 6 1 2 11 12 13 23
38 6 1 2 11 12 13 29
39 6 1 2 11 12 13 30
40 6 1 2 33 34 35 37
41 6 1 2 5 6 23 37
42 6 1 2 5 6 25 37
43 6 1 2 5 6 26 37
44 6 1 2 5 6 12 13
45 6 1 2 7 8 23 37
46 6 1 2 7 8 25 37
47 6 1 2 7 8 26 37
48 6 1 2 9 10 23 37
49 6 1 2 9 10 25 37
50 6 1 2 9 10 26 37
51 6 1 2 11 14 15 28
52 6 1 2 11 14 15 24
53 6 1 2 11 14 15 23
54 6 1 2 11 14 15 29
55 6 1 2 11 14 15 30
56 6 1 2 5 6 14 15

Table B.2 Continued

CUT SET NO. NO. OF COMP. IN C. S COMPONENTS NOS.

57 6 1 2 11 24 26 37
58 6 1 2 7 8 14 15
59 6 1 2 7 8 12 13
60 6 1 2 9 10 14 15'
61 6 1 2 11 12 13 27
62 6 1 2 9 10 12 13
63 6 1 2 11 14 15 27
64 6 1 2 11 24 25 37
65 6 1 2 11 25 28 37
66 6 1 2 11 26 28 37
67 6 1 2 11 25 29 37
68 6 1 2 11 25 30 37
69 6 1 2 11 26 29 37
70 6 1 2 11 26 30 37
71 6 1 2 11 31 32 36
72 6 1 2 11 25 27 37
73 6 1 2 11 26 27 37
74 7 1 2 11 26 31 32 37
75 7 1 2 11 29 33 34 37
76 7 1 2 11 30 33 34 37
77 7 1 2 11 28 33 34 37
78 7 1 2 11 27 33 34 37
79 7 1 2 7 8 33 34 37
80 7 1 2 11 24 33 34 37
81 7 1 2 5 6 33 34 37
82 7 1 2 9 10 33 34 37
83 7 1 2 11 12 13 31 32
84 7 1 2 11 25 31 32 37
85 7 1 2 11 14 15 31 32
86 8 1 2 11 31 32 33 34 37
[Fig. B.3: Example fault tree used for the comparison with WAM-CUT -- the
diagram is not recoverable from this text extraction.]

Table B.3: LIST OF MINIMAL CUT SETS GENERATED


BY BIT FOR FIG. B.3

CUT SET NO. NO. OF COUP. IN C. S. COMPONENTS NOS.


1 2 1 4
2 .1 11
3 2 1 8
4 3 4 8 9
5 2 2 4
6 2 3 4
7 3 1 -3 6
8 2 5
9 3 2 8 9
10 3
4
38 9
11 1 -3 -6 10
t2 2 1 7
13 4 -3 6 b 9
14
15
3 589
4 -3 4 -8 10
16 2 4 7
17 2 5 7
18 3 2 -3 6
19 2 2 5
20 4 -3 5 -6 10
21 2 3 5
22 4 2 -3 -6 10
23 2 2 7
24 3 -3 6 7
25' 2 3 7

B.3 On the Evaluation of the Equivalent Component's Parameters

In Section 3.2.2, it was mentioned that, to decrease the CPU time for the
cut set generation and improve the efficiency of the code, it is advisable
to combine some of the non-repetitive components and represent them as a
super event or equivalent component when the top event unavailability
computation is the desired goal.  The above procedure is applicable to
components with constant unavailability and to those which can be repaired
upon the inception of failure (monitored components).

The equivalent component parameters are highly dependent on the type of the
gate that the combined components are input to.  Here, we will only consider
the OR and AND gates.

The equations summarized in this appendix are taken from a report published
by Ross (1975).  In that report the asymptotic expected down time and up
time of a series and a parallel system of repairable components were
analyzed.  Table B.4 shows the final equations for the equivalent failure
rate, λ, and repair rate, μ, of an AND and an OR gate.  This table was
produced by some manipulation of the results given in Ross (1975).  In
another study, Modarres (1979), an approximate set of equations to evaluate
the equivalent λ and μ was reported.  For proof the reader is referred to
the above references.

TABLE B.4: EQUIVALENT FAILURE AND REPAIR RATES OF AN AND/OR GATE, FOR
MONITORED COMPONENTS AND COMPONENTS WITH CONSTANT UNAVAILABILITY

  Gate    Equivalent lambda             Equivalent mu                Gate output for constant
  type                                                               unavailability A_i

  AND     mu_eq * Q_A / (1 - Q_A),      Sum_{i in x} mu_i            Prod_{i in x} A_i
          Q_A = Prod_{i in x}
                lambda_i/(lambda_i+mu_i)

  OR      Sum_{i in x} lambda_i         lambda_eq * (1 - Q_O)/Q_O,   1 - Prod_{i in x} (1 - A_i)
                                        Q_O = 1 - Prod_{i in x}
                                              mu_i/(lambda_i+mu_i)

x is the set of n components input to the gate.
A_i is the constant unavailability of component i.
lambda_i, mu_i are the failure and repair rates of component i.
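A small illustrative sketch of the Table B.4 relations as reconstructed
above (they are the standard asymptotic results for independent monitored
components discussed in Ross (1975)); the parameter values are arbitrary
and the function names are this sketch's own.

    import math

    def unavailability(lam, mu):
        # Asymptotic unavailability of a single monitored component
        return lam / (lam + mu)

    def or_gate(lams, mus):
        # Equivalent (lambda, mu) and output unavailability for an OR gate (series logic)
        q = 1.0 - math.prod(1.0 - unavailability(l, m) for l, m in zip(lams, mus))
        lam_eq = sum(lams)                      # mean up time = 1 / sum(lambda_i)
        mu_eq = lam_eq * (1.0 - q) / q          # from q = lam_eq / (lam_eq + mu_eq)
        return lam_eq, mu_eq, q

    def and_gate(lams, mus):
        # Equivalent (lambda, mu) and output unavailability for an AND gate (parallel logic)
        q = math.prod(unavailability(l, m) for l, m in zip(lams, mus))
        mu_eq = sum(mus)                        # mean down time = 1 / sum(mu_i)
        lam_eq = mu_eq * q / (1.0 - q)
        return lam_eq, mu_eq, q

    lams = [1.0e-3, 5.0e-4]      # illustrative failure rates (1/hr)
    mus = [0.1, 0.05]            # illustrative repair rates (1/hr)
    print("OR :", or_gate(lams, mus))
    print("AND:", and_gate(lams, mus))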



B.4 References

1. EPRI, NP-803 (1978) "WAM-CUT, A Computer Code for Fault Tree

Evaluation" Electric Power Research Inst. Palo Alto California.

2. MODARRES M. (1979) "Reliability Analysis of Complex Technical

Systems Using The Fault Tree Modularization Technique" Ph.D.

Thesis, Nuclear Engineering Dept., Massachusetts Institute of

Technology.
3. ROSS S.M. (1975) "On The Calculation of Asymptotic System

Reliability Characteristics", Presented at the Conf on

Reliability and Fault Tree Analysis, SIAM 1975 pp 331 - 350.


PAGES (S) MISSING FROM ORIGINAL

APPENDIX C

ON THE CODE STRUCTURE AND INPUT DESCRIPTION TO UNRAC

C.1 Code Structure

UNRAC consists of three parts.  The first part is the cut set generator,
which reads in the fault tree description and generates the minimal cut
sets.  The second part is the unreliability and/or unavailability evaluator,
which reads in the component data, i.e., failure rate, test interval,
average test time, average repair time, etc., and calculates the top event
unavailability and the importance of each component involved.  Finally, the
third part is the Monte-Carlo simulator, which reads in the error factor and
distribution of each component's failure characteristics, i.e., failure
rate, average test time, etc., and simulates the top event distribution to
find the error bound on the top event unavailability.  Figure C.1 shows the
structure of UNRAC discussed above and its related subroutines.  The
function of each subroutine is described as follows:

1-FLOGIC  Reads in the fault tree, generates the cut sets, and (optionally)
ranks the minimal cut sets in order of their size.  It calls 3 subroutines:
a) MCSN1, b) MCSN2, and c) DECOD.  MCSN1 and MCSN2 are used to discard the
supersets and duplicated sets, and DECOD is used to transform the cut set
words into the individual components in each cut set.

2-COMDAT  Reads in the component data, checks for any misarrangements in the
input sequence and prints out the component input parameters.

Fig. C.1: The general routines used in UNRAC.  (Block diagram: the cut set
generator -- FLOGIC with MCSN1, MCSN2 and DECOD; the unavailability
evaluator -- COMDAT, QCAUN, TIMES, CIMPOR, QCOMP, AVERAG, QPRINT and QPLOT,
with their service routines MONTOR, EXAGAM, QCPONT, SOLNT, TEXGAM, QUNAV,
SYSCOM, GRAPH and ENDPLT; and the Monte-Carlo simulator -- MCSIM with XVART,
SYSCOM and the random number generators RAND, ANRAND, GAMARN and EXPRN.)
244

coefficients of the monitored component's unavailability

equations and evaluate the average unavailability of each

monitored component over a period of one year. EXAGAM is used

when the repair distribution of a periodically tested component

is exponential and/or special gamma(2nd order erlangian)


at the end QCAUN prints the average unavailabilities.

4-TIMES generates all the time points required for the time dependent

usedin the following routines.


analysis
5-CIMPOR Computes the importance of each component based on, either

routine or any
the average values calculated in QCAUN
which
specified time during the operations it calls QCPONT
will be discussed in the following routine.

6-QCOMP Calculates the top event unavailabilities of the system based

on time points generated by TIMESroutine. It calls QCPONT


and SYSCh. QCPONT calculates the time dependent unavail-

evaluates the top event


abilityof each component and SYSCCM
unavailability evaluates the
of each component and SYSCOM
top even unavailability given components unavailabilities.

calls SOLNTand TXGAMdepending upon the type of


QCPONT
componentunder study. If it is a monitored'SLNT will be
called if it is a periodically tested componentTEXGAM
will be called.

7-AVERAG Finds the average top event unavailabilty.


8-QPRINT Prints the top events unavailabilities at peaksand detailed
time intervals.
time dependent for the requested
245

9-QPLOT Plots the top event unavailabilities for the requested type
and intervals.

10-MCSIM Simulates the top event distributions. It callsXVART and

SYSCOM. XVART first simulates each component's failure

characteristics according to its distribution by bnte-Carlo

sampling. Then finds the each component's unavailability.

XVART calls several random number generators i.e., RAND

(unifomn), GARAN (Gamma), ANRAND


(normal) and EXPRN
(exponential).

C.2 On the Random Number Generators and Sorting Routine Used in the
Monte-Carlo Simulator MCSIM

MCSIM is able to simulate different distributions for the component's
failure characteristics.  In general, all of the distributions in use today
can be simulated by using a combination of uniform, exponential and normal
random number generators; see McGrath, et al. (1975).  For the uniform
random number generator, the general residual value routine for IBM was
employed (cf. Section 3.4.2).  To generate the exponential and normal
variates, the routines of Maclaren, et al. (1964) and Marsaglia and Bray
(1964) were used.  Figures C.2 and C.3 show the flow charts of the
exponential and normal random number generators.

To rank the top event unavailabilities in order of their magnitude, a
sorting routine which was used in LIMITS was incorporated in MCSIM.  Figure
C.4 shows the flow chart of the aforementioned routine.
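The exact rejection procedures of Maclaren et al. (1964) and Marsaglia and
Bray (1964) are given in the references.  The short sketch below is
illustrative only (it is not the UNRAC routines): it produces the same two
kinds of variates by the simpler inverse-transform and Box-Muller methods,
starting from a uniform generator.

    import math
    import random

    def exponential_variate(rate, u=random.random):
        # Exponential variate by the inverse-transform method: t = -ln(1-U)/rate
        return -math.log(1.0 - u()) / rate

    def normal_variate(mean, std, u=random.random):
        # Normal variate by the Box-Muller transformation of two uniforms
        u1, u2 = u(), u()
        z = math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)
        return mean + std * z

    random.seed(1)
    print([round(exponential_variate(0.1), 2) for _ in range(3)])   # mean ~ 10
    print([round(normal_variate(0.0, 1.0), 2) for _ in range(3)])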

Fig. C.2: Exponential distribution random number generator flow diagram.
Fig. C.3: Normal distribution random number generator flow diagram (the flow
chart is not recoverable from this text extraction).

Fig. C.4: Flow chart of the sorting routine used in MCSIM.  The interchange
step at its core is:

    PROB = P(I)
    P(I) = P(I+M)
    P(I+M) = PROB

Note: NRAND = number of random numbers to be sorted; P(I) = numbers to be
sorted.
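The interchange loop of Fig. C.4 simply orders the simulated top-event
values so that percentile (confidence) bounds can be read off.  A compact
illustrative equivalent in Python (the sample values are arbitrary):

    def sort_and_bounds(values, lower=0.05, upper=0.95):
        # Order simulated top-event unavailabilities and pick out percentile bounds
        ranked = sorted(values)             # same effect as the Fig. C.4 interchange loop
        n = len(ranked)
        pick = lambda p: ranked[min(n - 1, int(p * n))]
        return pick(lower), ranked[n // 2], pick(upper)   # lower bound, median, upper bound

    sample = [3.2e-4, 1.1e-4, 9.8e-5, 4.7e-4, 2.0e-4, 1.5e-4, 2.8e-4, 6.1e-5]
    print(sort_and_bounds(sample))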



C.3 Input Description of UNRAC

UNRAC is written in such a way that it can be used to run one or more cases
by inputting the proper sets of data.  Each data set of input is called a
"data group."  Each data group begins with a keyword card which identifies
the data, and one or more additional cards containing the proper information
for that data group.  A case consists of 3 data groups:

1) Fault tree logic
2) Component data
3) Output option

However, there are a total of 10 data groups needed to run a complete job
(demanding all the options encoded).  The 10 data groups are described
below.

1  Data Group #1 -- TITLE

This data group specifies the title for the case to be run.  It consists of
two card types.  The first card type is a keyword card containing the
characters "TITL" in the first 4 columns.  The second card type contains 80
characters of text to be used for the output heading.  The corresponding
variable names and required formats for this data group are:

Card Type #   Columns   Variable Name   Format    Description
1             1-4       ANAME           (A4)      Keyword "TITL"
2             1-80      TITLE1          (20A4)    Title for output



2  Data Group #2 -- LOGIC (Fault Tree Logic)

This data group describes the logic of the fault tree to be analyzed.  It is
identified by a keyword card beginning with the characters "LOGIC".  To
input the fault tree logic, all the basic events and gates should be
numbered in order to minimize the storage requirement.  Hence, the following
procedure should be employed:

1) First assign a number to each basic event (starting from 1).  For
complemented events use the negative of the number of the corresponding
event (i.e., if there exist both A and Ā in a fault tree and the number N
has already been assigned to A, then -N should be assigned to Ā).

2) Then assign a number to the top gate, and

3) Continue until all the gates are numbered.

The variable names and the required formats for this group are:

Card Type #   Columns   Variable Name   Format   Description
1             1-4       ANAME           (A4)     Keyword "LOGI"
2             1-3       IMAX            (I3,     Total number of components in the
                                                 fault tree
              4-6       IMAXT           I3,      Total number of components plus
                                                 gates in the fault tree
              7-9       LMAX            I3,      Maximum number of inputs to a gate
                                                 plus two.  Presently the LMAX
                                                 limit is 20.
              10-12     NSORT           I3,      A control flag for the minimal cut
                                                 set printout: NSORT=1, no sorting
                                                 of the minimal cut sets; NSORT=0,
                                                 order the minimal cut sets
                                                 according to their size.
              13-15     NMAX            I3)      Maximum allowable components per
                                                 cut set
3             1-...     L1(I,J)         (24I3)   Gate information card(s).  Each
                                                 card starts with the gate number
                                                 assigned in the fault tree,
                                                 followed by the gate type (1 for
                                                 AND, 0 for OR) and the gate inputs
                                                 in any order (components and/or
                                                 gates).

NOTE: The total number of cards of type #3 should be equal to IMAXT-IMAX,
and the cards should be arranged in sequential order of the gate numbers,
starting from the top gate.
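To make the numbering convention concrete, the following illustrative
sketch (not UNRAC's own input processing) stores card type #3 of the LOGIC
group as a small table and evaluates the top gate for a given set of failed
components; gate type 1 is AND and 0 is OR, as above, and the particular
tree is invented for the example.

    # gate number -> (gate type, list of inputs); inputs may be basic events
    # or other gates, and negative numbers denote complemented events.
    gates = {
        5: (0, [1, 6]),        # top gate: OR
        6: (1, [2, 3, 4]),     # AND gate
    }

    def event_state(n, failed):
        # True if basic event n (or its complement, for n < 0) has occurred
        return (abs(n) in failed) if n > 0 else (abs(n) not in failed)

    def gate_state(g, failed):
        gtype, inputs = gates[g]
        states = [gate_state(i, failed) if i in gates else event_state(i, failed)
                  for i in inputs]
        return all(states) if gtype == 1 else any(states)

    print(gate_state(5, failed={2, 3, 4}))   # True: AND gate 6 is satisfied
    print(gate_state(5, failed={2, 3}))      # False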

3  Data Group #3 -- COMPONENTS

This data group describes the components which make up the system to be
evaluated.  It is identified by a keyword card beginning with the characters
"COMP".  This card is followed by an option card containing the characters
"NEW" or "UPDATED".  The characters "NEW" indicate that the components to be
input are to become the effective component set for the case, replacing
previously input components (if any).  "UPDATED" indicates that only the
non-blank component parameters are to be used in updating the corresponding
parameters for previously input components.

After the "NEW" or "UPDATED" card, one card must be entered for each
component.  This card contains the component number, component name, and 11
parameters describing the reliability data for the component.  Zero entries
may be left blank.

Under the "NEW" option the component numbers should be sequential, starting
with one.  Under the "UPDATED" option, the component numbers are used as
keys to identify the components to be updated, and non-blank fields on the
following component cards replace the old values for the corresponding
parameters.  A negative number must be used to zero out a parameter.

The 11 parameters on the component cards are:



Symbol   Name     Description
λ        LAMDA    Failure rate (hr^-1) x 10^6
T2       TEST2    Periodic test interval (days)
T1       TEST1    First test interval (days)
Tc       TAU      Average test period (hours)
TR       REPAIR   Average repair time or mean-time-to-repair (hours)
q0       QOVRD    Test override unavailability
Pf       PTCF     Probability of test-caused failure
e        INEFF    Detection inefficiency
λu       ULAMDA   Undetected failure rate (hr^-1) x 10^6
qd       QRESID   Constant unavailability per demand
         DIST     Repair distribution

If e is input as a non-zero value, the program will recompute λ as follows:
λ1 = λ(1 - e).

If e is input and λu is left blank, the program will compute λu as λu = eλ.

The 11 parameters described above allow the user to specify most types of
components under a variety of testing schemes:

-- For non-maintained components, the user must provide a non-zero value for
λu and/or qd, and the rest of the parameters should be left blank.

-- For periodically tested components, the user must input non-zero values
for λ and T2, and optional values for T1, Tc, TR, q0, Pf, e, λu, qd and the
repair distribution type.

-- For monitored components, λ and TR are the essential inputs and T1 and T2
should be zero or left blank.  If a three-state component model is being
used, then non-zero values for q0 and Pf are required.  A non-zero value for
q0 refers to the probability that the component does not give any indication
of its failure [i.e., P3 = q0 in equation (3.1)], and a non-zero value for
Pf indicates the probability that the failure is a spurious one [i.e.,
P1 = Pf in equation (3.1)].
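As a rough illustration of how these parameters combine, the sketch below
computes a commonly used first-order estimate of the average unavailability
of a periodically tested component (failure-between-tests, testing, repair
and residual contributions).  It is a textbook approximation for orientation
only, not the detailed expressions UNRAC actually evaluates; the numerical
values loosely echo the sample input of Table C.1.

    def average_unavailability(lam, T2, tau, t_r, q_ovrd=1.0, q_d=0.0):
        # First-order average unavailability of a periodically tested component
        #   lam    : failure rate (1/hr)
        #   T2     : periodic test interval (hr)
        #   tau    : average test duration (hr)
        #   t_r    : average repair time (hr)
        #   q_ovrd : unavailability while under test (test override)
        #   q_d    : residual (per-demand) unavailability
        q_failure = lam * T2 / 2.0      # undetected failure between tests (lam*T2 << 1)
        q_test = (tau / T2) * q_ovrd    # fraction of time spent in test
        q_repair = lam * t_r            # fraction of time spent in repair
        return q_failure + q_test + q_repair + q_d

    # Illustrative values: lam = 3e-5/hr, monthly test, 1.5 hr test, 4.2 hr repair
    print(average_unavailability(lam=3.0e-5, T2=720.0, tau=1.5, t_r=4.2, q_d=1.0e-3))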

To specify the repair distribution, the following initials should be used:

1) E, for exponential
2) U, for uniform and/or constant
3) G, for gamma and/or 2nd order Erlangian
4) M, for log-normal approximated by the combination of exponentials

The data requirements for cases 1 to 3 are self-explanatory, i.e., TR is
sufficient to express the distribution.  For the fourth distribution two
parameters are needed [see equation (3.8)]; they are 1/λ1 and 1/λ2.

Finally, the last card of the component data should contain "-1" in the
component number field to end the data group.  The COMPONENTS data group
variable names and their formats are shown below:

Card Type #   Columns   Variable Name   Format     Description
1             1-4       ANAME           (A4)       Keyword "COMP"
2             1-4       TYPE            (A4)       Option "NEW" or "UPDATED"
3             1-5       INDX            (I5,       Component number
              6-13      NAME            A8,        Component name
              14-19     LAMDA           F6.0,      Failure rate x 10^6/hr
              20-25     TEST2           F6.0,      Test interval (days)
              26-31     TEST1           F6.0,      First test interval (days)
              32-37     TAU             F6.0,      Average test time (hr)
              38-43     REPAIR          F6.0,      Average repair time (hr)
              44-49     QOVRD           F6.0,      Override unavailability
              50-55     PTCF            F6.0,      Test-caused failure
              56-61     INEFF           F6.0,      Detection inefficiency
              62-67     ULAMDA          F6.0,      Undetected failure rate x 10^6/hr
              68-77     QRESID          E10.3,     Constant unavailability
              79        DIST            1X, A1)    Repair distribution
NOTE: There should be a total of IMAX (total number of components) cards of
type 3; otherwise the unavailability calculations based on the fault tree
input will not be correct.  To decrease the time interval between two output
results, it is recommended to use a dummy component card.  The number for
the dummy component should be IMAX + 1 and the name should be "DUMMY",
punched in columns 6-10.  The value of λ should be zero and the value of T2
should be the desired interval, say 2 days.  After the dummy, the
termination card should be used, i.e., code "-1" in columns 4 and 5 of the
last card.

4  Data Group #4 -- TIME (Optional)

This data group specifies the time period over which the component and
system unavailabilities are to be computed.  It consists of a keyword card
beginning with the characters "TIME" followed by a single card containing
the total time (in days) over which the time dependent, instantaneous
unavailability is to be computed.

The number of time points generated by the code within the time period is a
function of the test intervals, testing times, and repair times of the
components.  A pair of points is generated wherever a change in the slope of
any component unavailability function occurs.

If the data group (including the keyword card) is omitted, or if a zero is
entered for the "time period", the default value of 365 days will be used by
the code.  The TIME data group variable names and their formats are shown
below:

Card Type #   Columns   Variable Name   Format     Description
1             1-4       ANAME           (A4)       Keyword "TIME"
2             1-10      TEND            (F10.0)    Total time period (days)

5  Data Group #5 -- IMPORTANCE (Optional)

This data group is used to request the importance evaluation of a specified
set of components, or of all the components, towards the top event, based on
either the average unavailability or the instantaneous unavailability of the
components in the system.  The data set is identified by a keyword card
beginning with the characters "IMPO".  This card is followed by an option
card containing the characters "ALL" or "SPECIFIC".  "ALL" is used if the
calculation of the importance of all of the components is desired.  The
"SPECIFIC" option is used when one or more components of the system have
either failed or become inactive in one way or another, and it is desired to
evaluate the importance of the rest of the components involved.

The third card in this data group should contain the time at which the
importance calculations must be carried out.  This time should be a non-zero
value for the "SPECIFIC" case.  However, for the "ALL" case the time card
can be a blank card.  If a blank card is used, the importance calculation
will be carried out based on the average unavailability of each component
evaluated over a period of one year.  For the "ALL" case, the time card is
the last card in this data set.

In the case of the "SPECIFIC" option a total of 5 cards are needed.  The
fourth card should contain the number of the specified component which will
be assumed to be in the failed state or under test.  Finally, the fifth card
should contain a list of the dependent or inactive components.  The
IMPORTANCE data group variable names and their formats are shown in the
following table:

Card Type #   Columns   Variable Name   Format     Description
1             1-4       ANAME           (A4)       Keyword "IMPO"
2             1-4       TYPE            (A4)       Option "ALL" or "SPEC"
3             1-10      TIMPOT          (E10.3)    Time of importance analysis (hr)
IF TYPE = ALL, go to data group 6; otherwise:
4             1-3       IMCOMP          (I3,       Number of the specific component
                                                   to be out of service
              4-6       IDEP            I3)        Total number of dependent
                                                   components to be read on the
                                                   next card
5             1-...     IRELCM(K),      (20I3)     Numbers of the components
              K=1,IDEP                             dependent on IMCOMP

6 Data Group #6 -- 'PRINT (Optional)

This data set is used to request a table printout of the system


unavailabilities computed by the program over one or more time intervals

(within the input time period) and to specify the number of instantaneous

unavailabilities to be separately ranked. The data group is identified

by a keyword card beginning with the characters "PRIN". The keyword card

must be followed by one card containing the number of time intervals

desired and the number of maximum unavailabilities desired. The value

input for the number of intervals may be -1, 0, 1, 2, 3, or 4.

If the value input is negative, all system unavailabilities computed are
printed; no additional cards are necessary.  If the value is zero, any print
options previously specified are nullified, the default option (no print) is
instituted, and no additional cards are necessary.  If the value is greater
than zero, another card containing the end points of the intervals is read;
in this case the program will print the system unavailability at all the
computed time points that fall within the specified interval(s), including
the end points.  A maximum of four intervals may be specified.
The maximum unavailability output lists, in decreasing order, the n
greatest instantaneous unavailabilities computed by the program. The

number of unavailabilities printed (n) has a default value of 12 and may

not exceed 100. If the PRINT data set (including the keyword card) is

omitted, 12 peaks are printed, and no other system unavailability print-

out is produced.

The PRINT data group variables name and the required formats are:

Card Type #   Columns   Variable Name   Format     Description
1             1-4       ANAME           (A4)       Keyword "PRIN"
2             1-5       NPRINT          (I5,       Number of time intervals for
                                                   printing system unavailabilities
                                                   (-1, 0, or 1-4)
              6-10      NPEAK           I5)        Number of peaks to be printed;
                                                   default is 12
IF NPRINT <= 0, go to the next data group; otherwise:
3             1-80      (STPRT(I),      (8F10.0)   Start of the Ith interval (days)
              FINPRT(I),                           End of the Ith interval (days)
              I=1,4)                               Maximum of 4 intervals

7 Data Group #7 -- PLOT (Optional)

This data set is used to specify the time intervals used for plotting the
system unavailability.  It is identified by a keyword card beginning with
the characters "PLOT".  The keyword card must be followed by a card
containing the number of intervals.


If the number of intervals is negative, the plot interval is set to the
total time period over which points are computed and no additional cards are
necessary.  If this value is zero, any plot intervals previously input are
nullified and the default plot interval is instituted.  The default interval
is given by:

    max_i (T1_i + 2 T2_i + Tc_i + TR_i)
max (Tli + 2 T2i + Tci + TRi)

where Tli, T2i, Tci and TRi are the first test interval, second test

interval, mean test time and mean repair time of ith components respec-

tively. Thus the default interval is the three largest test cycles of

any component, which is often sufficient for establishing the system


260

behavior.

If the default plot interval exceeds the total time period, then the time
period is used instead.  If the number of intervals is greater than zero,
another card containing the beginning and end points of each interval is
read.  A maximum of four intervals may be specified.

Note that, unlike the PRINT data set which actually activates the system
unavailability printout, the PLOT data set merely sets up the plot intervals
which are to be used.  Plots must be requested in the RUN data set
(explained in the next data set) in order for graphical output to be
produced.  If plots are requested in the RUN data set but the PLOT data set
(including the keyword card) is not input, then the default interval
described above will be used.  The following table shows the variable names
and the required formats for the PLOT data group:
Card Type #   Columns   Variable Name   Format     Description
1             1-4       ANAME           (A4)       Keyword "PLOT"
2             1-5       NPLOT           (I5)       Number of time intervals for
                                                   plotting system unavailabilities
                                                   (-1, 0 or 1-4)
IF NPLOT <= 0, go to the next data group; otherwise:
3             1-80      (STPLT(I),      (8F10.0)   Start of the Ith plot interval (days)
              FINPLT(I),                           End of the Ith plot interval (days)
              I=1,4)                               Maximum of 4 intervals

8  Data Group #8 -- RUN

This data set initiates the system unavailability calculations.  The TITLE,
LOGIC, COMPONENTS and, optionally, the TIME, IMPORTANCE, PRINT, and PLOT
parameters must be set up before the RUN data set.  The RUN data set is
identified by a keyword card beginning with the characters "RUN".  The "RUN"
keyword card must be followed by one or more run data cards, where each has
the following parameters.

A -- Calculation accuracy number: a number code identifying the number of
terms to be used in calculating the top event unavailability.  The number
coded should be

    "1" if the first term approximation (rare event approximation),
    "3" if the first 3 terms approximation,
    "5" if the first 5 terms approximation

is desired.  For most evaluations use "1", because "3" and "5" are usually
time consuming.

B -- Unavailability option -- four letter code selecting the type

of unavailability to be computed where

"FAIL" means compute the instantaneous unavailability based

on contributions from component failures only (the

between tests contribution).

"TOTL" means compute the instantaneous unavailability based

on contributions from failures, testing and repair.

When the unavailability option is left blank, the default

value is 'TOTL".
262

C -- Plot requirement data

C.1 -- x-scale: a four letter code specifying the scaling of the points
along the x (time) axis, where

    "NONE" means no plots are produced.

    "LIN" means a linear scale is used for the time points.

    "MAG" means that the time points are spaced at equal intervals regardless
    of the actual elapsed time between the points.  This produces a plot in
    which the test and repair contributions are magnified, so that the
    structure of the system unavailability function is easier to see.  The
    indices of the time points are plotted along the x-axis.

    "BOTH" means both the "LIN" and "MAG" scales are used.  Two plots are
    produced for each y-scale selected.

If the x-scale is left blank, the default value is "LIN" when the
unavailability option is "FAIL", and "BOTH" when the option is "TOTL".
C.2 -- y-scale: a four letter code specifying the scaling of the points
along the y (system unavailability) axis, where

    "NONE" means no plots are produced (may be omitted if x-scale = "NONE").

    "LIN" means a linear scale is used for the system unavailabilities.

    "LOG" means a log scale is used for the system unavailabilities.

    "BOTH" means both the "LIN" and "LOG" scales are used.  Two plots are
    produced for each x-scale selected (e.g., if x-scale = "BOTH" and
    y-scale = "BOTH", four plots are produced for each time interval
    specified in the "PLOT" data group).

If the y-scale is left blank, the default value is "LIN" when the
unavailability option is "FAIL", and "LOG" when the unavailability option is
"TOTL".

C.3 -- plot cutoff option - power of 10 to be used as a lower


bound on system unavailability for plotting (e.g.,

-7 = 10-7). The default is no cutoff.

C.4 -- plot title -- 56 character text to appear as a plot sub-

heading in addition to the title for the case.

A negative system number indicates the end of the RUN

data group. The RUN data group variables are:

Card Type #   Columns   Variable Name   Format      Description
1             1-4       ANAME           (A4)        Keyword "RUN"
2             1-3       NSYS            (I3, 1X,    Calculation accuracy number
              5-8       QOPT            A4, 1X,     Unavailability option
              10-13     XOPT            A4, 1X,     x-scale
              15-18     YOPT            A4, 1X,     y-scale
              20-23     ICUTO           I4, 1X,     Cutoff option
              25-80     TITLE2          14A4)       Plot title

NOTE: This data group will terminate if a -1 is used for the calculation
accuracy number.

9  Data Group #9 -- SIMULATION (Optional)

This data group is used to request the simulation of the top event
distribution using the Monte-Carlo technique.  The SIMULATION data group is
identified by a keyword card beginning with the characters "SIMU".  This
card must be followed by a card which requests the type of simulation
desired.  The code can simulate the top event based on:

1) error bounds on the average unavailability of each component (WASH-1400
type calculations), termed "type 1 simulation";

2) error bounds on the component's failure characteristics, i.e., on the
failure rate, average test time, average repair time, etc., termed "type 2
simulation".  (This routine generates each component's parameter
distributions and then evaluates the average component unavailabilities by
random sampling from the generated parameter distributions.)

3) error bounds on the component's failure characteristics, but evaluating
the top event distribution at specific time(s) of operation (up to 5
different operational times), termed "type 3 simulation".

The third card type in this data group contains the number of Monte-Carlo
iterations, the initial random number generator and the degree of accuracy
of the evaluation.  The uniform random number generator used in this code is
the general IBM residual value routine.  The general form of the random
number generator is:

    x_{n+1} = a x_n (mod 2^p)

where

    p = 31 for the IBM 370
    a = initial random number generator.

The fourth card type in this data group is for the error factors on the
component's failure characteristics and their respective distributions.
Each card carries information about two successive components (i.e.,
information about components #i and #i+1).  The variable names and required
formats used in this data group are:

Card Type #   Columns   Variable Name   Format        Description
1             1-4       ANAME           (A4)          Keyword "SIMU"
2             1-3       NJMC            (I3)          A control number for the type of
                                                      simulation desired, 1 <= NJMC <= 7:
                                                      NJMC=1, type 1 simulation;
                                                      NJMC=2, type 2 simulation;
                                                      NJMC=3-7, type 3 simulation
3             1-5       NRAND           (I5,          Number of iterations for the
                                                      Monte-Carlo simulation
              6-20      IX              I15,          Initial random number generator
              21-26     ACC             F6.0)         Degree of accuracy on the results;
                                                      default is 0.5
4             1-80      (FA(I,J),       [8(F9.0,A1)]  Error factors on the component's
              IDTS(I,J), J=1,4),                      failure characteristics data and
              (FA(I+1,J),                             their distributions.  The
              IDTS(I+1,J), J=1,4)                     parameters are the failure rate,
                                                      average test time, average repair
                                                      time and residual constant
                                                      unavailability.
IF NJMC < 3, go to the next data group; otherwise:
5             1-50      IMC(I)          (5E10.3)      This card is needed if NJMC >= 3.
                                                      It should contain the time(s) at
                                                      which the simulation should be
                                                      carried out (time in hours).
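For the type 2 and type 3 simulations, the error factors are turned into
parameter distributions for sampling.  The minimal sketch below assumes the
usual WASH-1400-style convention (an assumption for illustration, since the
exact convention is described in Chapter 3): a log-normal whose 95th
percentile is EF times its median, sampled once per Monte-Carlo trial.

    import math
    import random

    Z95 = 1.645   # standard normal 95th percentile

    def sample_lognormal(median, error_factor, rng=random):
        # Sample a parameter whose uncertainty is log-normal with the given error factor
        sigma = math.log(error_factor) / Z95        # EF = 95th percentile / median
        return median * math.exp(rng.gauss(0.0, sigma))

    random.seed(1220703125)                         # seed echoing the sample input's IX
    lam_median, ef = 30.0e-6, 10.0                  # illustrative failure rate and error factor
    samples = [sample_lognormal(lam_median, ef) for _ in range(5)]
    print([f"{s:.2e}" for s in samples])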

10  Data Group #10 -- END

This card terminates the computations.  The card contains the characters
"END" in its first three columns.  If more calculations are required, simply
modify or add the desired parameters using the appropriate data groups and
bypass the END card.  It should be remembered that once the fault tree is
read in, there is no need to input it again if the same system is being
analyzed for all the subsequent cases.  To get the time dependent
unavailabilities, the RUN data group should always follow the changed cases.

When the "END" card is used the job will terminate and the code will print
"END OF CALCULATION BY UNRAC".  This statement signifies that all the
desired evaluations have completed successfully.  Otherwise look for
misprinted and misplaced card(s).



C.4 References

1. MACLAREN M.D. et al. (1964) "A Fast Procedure for Generating

Exponential Random Number" Communication of ACM,7.

2. MARSAGLIA G. and BRAY T.A. (1964) "A Convenient Method for

Generating Normal Variables" SIAM Review 6.



C.5 On The Input Sample Fault Tree


The first step in preparing the input data for UNRAC is

to number the fault tree components and/or gates. However,

if there exists any NAND, EOR or NOR gates in the fault tree

one has to transform them to the basic AND and OR gates accor-

ding to Fig. 3.3 before numbering the events and gates.

For the sample fault tree, the AFWS is chosen. The fault

tree diagram is given in Fig. 4.4. As can be seen, the fault

tree is numbered and there exists a total of 23 basic events

(components) and 10 gates (ie. IMAX=23 and IMAXT=23*10=33).

Table C.1 shows the input data which is prepared for the

analysis. In this analysis we requested the following infor-

mation.

1) Minimal cut sets of the fault tree to be printed

out in any order ie. NSORT-1).

2) All the component importance calculations to be

carried out based on the average unavailability

of each component (ie. TYPE=ALL and TIMPOT=(Blank)).

3) Average and instantaneous unavailabilities to be

calculated based on first term approximation (ie.

NSYS=1), 12 peaks and complete instantaneous un-

availabilities of the system between zero and 60

days to be printed out (ie. NPEAK=(Blank) and

NPRINT=1).

4) Top event simulation to be carried out based on the

error factor on the component's failure characteri-


269

stics (ie. failure rate, average test time, average

repair time, etc.) and their assumed log-normal

distributions.

Table C.2 gives all the information requested. In sheet
one of Table C.2, there is some useful information about
the completeness of the minimal cut set generator, ie., I is
the total number of minimal cut sets and II is the number of
times cut set I is replaced by cut set II (for further
information see Fig. 3.1). In sheet 5 of Table C.2, a note is
printed out to express the values of false failure and
unrevealed fault of a monitored component. In sheet 8 of
Table C.2, the top event unavailability based on the average
component unavailabilities is given. In sheet 14 of Table C.2,
information concerning the error bounds, median value, standard
deviation, third and fourth moments and the values of β1 and β2
(BETA 1 and BETA 2) are reported. Finally, in sheet 15 the top
event cumulative probability distribution with the respective
accuracies is given.
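(Regarding the BETA 1 and BETA 2 entries of sheet 14: assuming they are
the usual Pearson shape coefficients, they follow from the printed
standard deviation and central moments as

    \beta_1 = \frac{\mu_3^{2}}{\sigma^{6}}, \qquad
    \beta_2 = \frac{\mu_4}{\sigma^{4}},

where \mu_3 and \mu_4 are the third and fourth central moments; with the
sheet 14 values these expressions indeed reproduce 1.6376E+02 and
2.3341E+02.)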

NOTE: The CPU time option in the code is inactive; therefore,
please disregard the .0 time consumption reported for the cut
set generation.
270

Table C.1: LIST OF INPUT TO UNRAC FOR THE SAMPLE PROBLEM

TITLE
AUX. FEED WATER RELIABILITY ANALYSIS.
LOGIC
23 33 8 1 30
24 .0 1 25 28
25 1 28 27
26 0 2 3
27 0 4 5
28 1 29 30
29 0 7 6 9 8 10
30 0 11 12 31
31 1 32 33
32 0 13 14 15 16 17 19
33 0 18 19 20 21 22 23
COP
NEW
1 SINS. F 5. 1 0E-
OE07 L
2 CV 133 1 .000E-04 L
3 CV 131 1 OOOE-04 L
4 CV 136 1.000E-04 L
5 CV 138 1. OOOE-04 L
6 CV. TE 1.000E-04 L
MV. TUR .3 30. 10. 1.5 4.2 1.00 1 . OOOE-03 U
8 TURP.MP 30. 30. 10. 1.5 4.2 1.00 . 000E-03 U
9 NO STEM I .000E-03 L
.10. 1.5 4.2 1.0 1 .000E-03 U
1053 2 F .1, 30.
11BLD FLD 7.500E-05 L
1 .000E-02 L
12C. Pi;R F
13CONVPI 5.1 30. 20. 1.5 4.2 1.0 2.000E-03 U
14 NOPWR2 3.700E-02 L
15VP'ilP1 .3 30. 20. 1.5 4.2 1.0 1. 000E03 U
16CV. EP2 0E-04 L
. o00
17 PMP 2 30. 30. 20. 1.5 4.2 1.0 1 .OOOE-03 U
18P,'P 1 30. 30. 30. 1.5 4.2 1.0 1 .000E-03 U
1 .000E-03 L
19OPER FLT
L
2C0CV.EP1 1.000E-04 U
21MVPMPl .3 30. 30. 1.5 4.2 1.0
22NOPWRI 3. 700E-02 L
23CONVP1 5.1 30. 30. 1.5 4.2 1.0 2.000E-03 U
24DUMMY 5. 5.
-1
TI MIE
60.
IMPO
ALL

PRINT
1
0.0 60.
RUN
1 TOTL NONE NONE AUX. FEED WATER SYSTEM REL. ANALYSIS.
-1
SIMUL
2
1200 1220703125 .50
271

Table C.1 Continued

30.0 L 10. L
10. L 10.0 L
10. L 10.0 L
10, L 3. L 3. L 10. L 10. L. 3. L 3.0 L10. 1.
10. L. 3.0 L 3.0 L 10. .
30. L 3.0 I-
I,
10. L 3.9 L 3.0 L 10. L 10.
10. L 3.0 L 3.0 L 10. L 10. I.
L
10. L 3.0 L 3.0 L 10. L 10. L 3.0 L 3.0 L 10.0 L,
10. L 10. L
10.0 L 3.0 L 3.0 L 10.0 L t0.
10.0 L 3.0 L 3.0 L 10.0 L
END
272

Table C.2: RESULTS OF UNRAC FOR THE SAMPLE


PROBLEM

TABLE -1 FAULT TREE LOGIC

GATE GATE INPUT COMP. OR GATES


NO. TYPE

24 0 1 25 28 0 0 0
25 1 26 27 0 0 0 0
26 0 2 3 0 0 0 0
27 0 4 5 0 0 0 0
28 1 29 30 0 0 0 0
29 0 7 6 9 8 10 0
30 0 11 12 31 0 0 0
31 1 32 33 0 0 0 0
32 0 13 14 15 16 17 19
33 0 18 19 20 21 22 23
MINO IND LIN I II
0 0 145 146 145
273

Table C.2 Continued (sheet 2)

* TOTAL NUMBER OF CUT SETS GENERATED = 145 * TIME CONSUMED .0

TABLE -2
CUT SET NO. NO. OF COMP. IN C. S COMPONENTS NOS.
1 1 1
2 2 2 4
3 2 7 11
4 2 3 4
5 2 2 5
6 2 6 11
7 2 9 11
8 2 8 11
9 2 10 11
10 2 7 12
11 3 7 13 18
12 2 3 5
13 2 6 12
14 3 6 13 18
15 2 9 12
16 3 9 13 18
17 2 8 12
18 3 8 13 18
19 2 10 12
20 3 10 13 18
21 3 7 14 18
22 3 7 15 18
23 3 7 16 18
24 3 7 17 18
25 2 7 19
26 3 7 17 23
27 3 7 13 20
28 3 7 13 21
29 3 7 13 22
30 3 7 13 23
31 3 6 14 18
32 3 6 15 18
33 3 6 16 18
34 3 6 17 18
35 2 6 19
36 3 6 17 23
37 3 6 13 20
38 3 6 13 21
39 3 6 13 22
40 3 6 13 23
41 3 9 14 18
42 3 9 15 18
43 3 9 16 18
44 3 9 17 18
274

Table C.2 Continued (sheet 3)

TABLE - 2 CONTINUED ..

CUT SET NO. NO. OF COMP. IN C. S. COMPONENTS NOS.
45 2 9 19
46 3 9 17 23
47 3 9 13 20
48 3 9 13 21
49 3 9 13 22
50 3 9 13 23
51 3 8 14 18
52 3 8 15 18
53 3 8 16 18
54 3 8 17 18
55 2 8 19
56 3 8 17 23
57 3 8 13 20
58 3 8 13 21
59 3 8 13 22
60 3 8 13 23
61 3 10 14 18
62 3 10 15 18
63 3 10 16 18
64 3 10 17 18
65 2 10 19
66 3 10 17 23
67 3 10 13 20
68 3 10 13 21
69 3 10 13 22
70 3 10 13 23
71 3 7 17 22
72 3 7 14 20
73 3 7 14 21
74 3 7 14 22
75 3 7 14 23
76 3 7 17 21
77 3 7 15 20
78 3 7 15 21
79 3 7 15 22
80 3 7 15 23
81 3 7 17 20
82 3 7 16 20
83 3 7 16 21
84 3 7 16 22
85 3 7 16 23
86 3 6 17 22
87 3 6 14 20
88 3 6 14 21
89 3 6 14 22
275

Table C.2 Continued (sheet 4)

TABLE - 2 CONTINUED:
CUT SET NO. NO. OF COMP. IN C. S. COMPONENTS NOS.
90 3 6 14 23
91 3 6 17 21
92 3 6 15 20
93 3 6 15 21
94 3 6 15 22
95 3 6 15 23
96 3 6 17 20
97 3 6 16 20
98 3 6 16 21
99 3 6 16 22
100 3 6 16 23
101 3 9 17 22
102 3 9 14 20
103 3 9 14 21
104 3 9 14 22
105 3 9 14 23
106 3 9 17 21
107 3 9 15 20
108 3 9 15 21
109 3 9 15 22
110 3 9 15 23
111 3 9 17 20
112 3 9 16 20
113 3 9 16 21
114 3 9 16 22
115 3 9 16 23
116 3 8 17 22
117 3 8 14 20
118 3 8 14 21
119 3 8 14 22
120 3 8 14 23
121 3 8 17 21
122 3 8 15 20
123 3 8 15 21
124 3 8 15 22
125 3 8 15 23
126 3 8 17 20
127 3 8 16 20
128 3 8 16 21
129 3 8 16 22
130 3 8 16 23
131 3 10 17 22
132 3 10 14 20
133 3 10 14 21
134 3 10 14 22
276

Table C.2 Continued (sheet 5)

TABLE - 2 CONTINUED :
CUT SET NO. NO. OF COMP. IN C. S. COMPONENTS NOS.
135 3 10 14 23
136 3 10 17 21
137 3 10 15 20
138 3 10 15 21
139 3 10 15 22
140 3 10 15 23
141 3 10 17 20
142 3 10 16 20
143 3 10 16 21
144 3 10 16 22
145 3 10 16 23

*** TIME USED TO GENERATE THE MIN. CUT SETS WAS .0 SECONDS ***
** NOTE : IN TABLE 3, COLUMNS 8, 9 REPRESENT FOUR DIFFERENT VARIABLES;
WHEN TEST INTERVAL IS ZERO THEN
COL. 8 IS P3 (PROBAB. THAT MONITORED COMP. NEEDS THOROUGH TESTING)
COL. 9 IS P1 (PROB. THAT THE FAIL. IS SPURIOUS)
WHEN TEST INTERVAL IS NON-ZERO THE VALUES ARE AS WRITTEN.
277
Table C.2 Continued (sheets 6 and 7)

[The computer printout on sheets 6 and 7 -- the component data
referred to as Table 3 in the note above -- is not legible in this
reproduction.]
279

Table C.2 Continued (sheet 8)

INPUT DATA FOR IMPORTANCE ANALYSIS


TYPE OF CALCULATION = ALL
TIME OF EVALUATION = 0.0
AVERAGE OR POINT ESTIMATE TOP EVENT UNRELIABILITY IS = 3.1497D-04
280

Table C,2 Continued (sheet 9)

TABLE - 5

FUSSELL-VESELY AND BIRNBAUM'S MEASURE OF IMPORTANCE

COMPONENT FUSSELL-VESELY BIRNBAUM'S


NO. MEASURE MEASURE

1 1.6192E-03 1.0000E+00
2 6.3498E-05 2.0000E-04
3 6.3498E-05 2,OOOOE-04
4 6.349 3E-05 2. 0000E-04
5 6.3498E-05 2.0000 E-04
6 4.6507E-03 1 .4660E-02
7 1 .5392E-01 t .4680E-02
8 6.4245E-01 1 .4680E-02
9 4.6607E-02 1 .4680E-02
10 1.5061E-01 1.4680E-02
11 5.1001E-03 2. 1419E-02
12 6.8001E-01 2.1419E-02
13 2.4262E-02 1.2867E-03
14 1.5115E-01 1 .2887E-03
15 1 .3235E-02 1 .2867E-03
16 4.0850E-04 1.2867E-03
17 5.6086E-02 1 .2867E-03
18 5.6619E-02 1.2853E-03
19 6.8001E-02 2.1419E-02
20 4.0807E-04 1.2853E-03
21 1 .298;E-02 1 .2853E-03
22 1 .5098E-01 1.2853E-03
23 2.4139E-02 1 .2853E-03
281

Table C.2 Continued (sheets 10 through 12)

[The computer printout on sheets 10 through 12 is not legible in
this reproduction.]
284

Table C.2 Continued (sheet 13)

AUX. FEED WATER RELIABILITY ANALYSIS.

NUMBER OF COMPONENTS = 23
NUMBER OF TRIALS = 1200

*** COMPONENT VALUE

COMPONENT PARAMETER SPREAD & DISTRIBUTION


NO. LAMDA DIS TC DIS TR DIS QRESID DIS
1 0.0 * 0.0 0.0 30.000 I
2 0.0 0.0 0.0 10. 000 L
3 0.0 0.0 0.0 10. 000
4 0.0 0.0 0.0 10.000
5 0.0 0.0 0.0 10. 000
6 0.0 0.0 10.000 L
7 10.000 L 3.000 L 3.000 L 10.000 L
8 10.000 L 3.000 L 3.000 L 10.000 L
9 0.0 0.0 0.0 10.000 .L
10 10.000 L 3. 00 L 3.000 L 10.000
11 0.0 0.0 0.0 30.000 L
12 0.0 0.0 0.0 3.000 L
13 L 3.000 L 3.000 L 10.000 L
14 0.0 0.0 0.0 10.000 L
i
L
15 10.000 L 3.000 L 3.000 L 10.000 L
L
16 0.0 0.0 0.0 10.000 L
L
17 10.000 L 3.000 L 3.000 L 10.000
18 10.000 L 3.000 L 3.000 L
L 10.000 iI
i
19 0.0 0.0 0.0 10.000 i
i
20 0.0 0.0 0.0 10.000 i
21 10.000 L 3.000 L 3.000 L 10.000
22 0.0 0.0 0.0 13.000
23 10.000 L 3.000 L 3.000 L 10.000
285

Table C.2 Continued (sheet 14)

*** AUX. FEED WATER RELIABILITY ANALYSIS.

DISTRIBUTED VALUES:

MEDIAN POINT VALUE = 3.1497E-04


TRUE MEDIAN = 5.9415E-04
ERROR FACTOR = 7.2066E+00
MEAN = 1.2970D-03
STANDARD DEVIATION = 3.2431D-03
5% BOUND = 1.3432E-04
95% BOUND = 4.2818E-03
BETA 1 = 1.6376D+02
BETA 2 = 2.3341D+02
THIRD CENT. MOMENT = 4.3648D-07
4TH CENT. MOMENT = 2.5819D-08
286

Table C.2 Continued (sheet 15)

[The printout of sheet 15 -- the top event cumulative probability
distribution -- is not legible in this reproduction.]
287

Table C.2 Continued (sheet 16)

END OF CALCULATION BY UNRAC

