Doctoral Thesis
Author(s):
Nishijima, Kazuyoshi
Publication Date:
2009
Permanent Link:
https://doi.org/10.3929/ethz-a-005762828
Rights / License:
In Copyright - Non-Commercial Use Permitted
DISS. ETH NO. 18238
A dissertation submitted to
ETH ZURICH
for the degree of
Doctor of Sciences
presented by
Kazuyoshi Nishijima
born 13.08.1978
citizen of Japan
2009
Acknowledgements
"You are one of the freest PhD students in the world," my supervisor Professor Faber
once said to me - and yes, I am. He gave me this topic and motivated me to conduct
this research work. However, during the course of the work he never gave me any
"fixed" assignments. Instead, he gave me the opportunity to explore different research
disciplines. At the same time, whenever necessary, he gave me exceptional support,
both professionally and personally. Here, I express my deep gratitude to him.
My colleagues in the group of Risk and Safety have created a friendly, pleasant atmosphere, conducive to research work. It is thanks to them that I could always concentrate on my research. Among others, I express my wholehearted thanks to Matthias Schubert, who was my office mate over the last few years. Whenever I encountered technical or personal problems, he generously gave of his time to discuss them with me: he understood the problems and provided me with useful advice. I also express my gratitude to Patricia Meile, the secretary of our research group. She constantly supported me in setting up and improving my working environment, including administrative issues such as acquiring a work permit for a foreign student. Dr. Daniel Straub, a senior former colleague in the group, greatly influenced my motivation towards research and my research style, both directly and indirectly, which I appreciate very much.
I also want to thank all my friends and especially my family for their support. Their
support has encouraged me to continue with the PhD work. This work is dedicated to
them.
Finally, my gratitude goes to Professors Bretschger and Lind for acting as external
examiners and providing valuable comments.
Zurich, 28.02.2009
Kazuyoshi Nishijima
Abstract
Sustainable societal development has become a subject of increased and widespread public attention, especially during the last two decades. The tremendous economic development of former developing nations such as China and India and the general impact of globalization have put even greater pressure on our limited natural resources and fragile environment. Faced with ever increasing evidence that the activities of our own generation might actually impair the possibilities for future generations to meet their needs, it has become a major political concern that societal development must be sustainable. The publication of the famous Brundtland report "Our Common Future" (1987) marked a political milestone. This important event has enhanced public awareness that substantial changes of consumption patterns are called for and has further significantly influenced research agendas worldwide.
Motivated by this and focusing on the civil engineering sector, the present thesis has
two aims. The first aim is to reformulate the classical life-cycle cost optimization
concept, which has been advocated in civil engineering as the decision principle, in
such a way that relevant aspects of sustainability can be incorporated into engineering
decision-making. The aspects of sustainability considered in depth in this
reformulation are intergenerational equity and the allocation of limited resources. Furthermore, to facilitate application in practical decision situations, a platform for the modelling and optimization of decision problems is proposed, based on Bayesian probabilistic networks. The proposed platform makes it possible to consider, within the decision problems, the constraints relating to societal sustainability that are posed by present society. The second aim is to present a fundamental approach for incorporating the reliability of civil infrastructure into general economic models so that sustainable policies for the design and maintenance of civil infrastructure can be identified from a macroeconomic perspective.
In the present thesis, two types of engineering decision analysis are differentiated in order to clarify the extent of the consequences of decisions: marginal engineering decision analysis and non-marginal engineering decision analysis. In marginal engineering decision analysis, it is assumed that the economic growth path is exogenously given and that the consequences of decisions do not affect economic growth; the life-cycle cost optimization concept corresponds to marginal engineering decision analysis. The first aim of the present thesis can thus be regarded as the formulation of engineering decision problems from a sustainability perspective in the context of marginal decision analysis. In contrast, non-marginal decision analysis considers the change of economic growth as a consequence of decisions; the second aim of the present thesis can be regarded as a proposal for a decision framework for non-marginal engineering decision analysis.
The present thesis consists of eight chapters. Chapter 1 introduces the background,
aim, scope and outline of the thesis. A literature survey is also provided in the fields of
economics and civil engineering, where the formulation and optimization of
sustainable decision making in civil engineering is dealt with. The core of the present
thesis consists of six chapters (Chapters 2 to 7). Each of the chapters, except Chapter 7,
represents a part of my research work published during the PhD study. Chapter 2
considers the general treatment of uncertainties in engineering decision analysis, which
is the philosophical basis for decision-making subject to uncertainties. Chapters 3 to 5,
respectively, investigate the modelling and optimization of sustainable decision
problems, the issue of intergenerational equity and the issue of allocation of limited
resources in the context of marginal engineering decision analysis. In Chapter 6 the
approach for incorporating the reliability of civil infrastructure in general economic
models is proposed based on economic growth theory. This approach corresponds to
non-marginal engineering decision analysis. The proposed approach is then applied to
a simplistic economic model in Chapter 7 in order to show how the optimal reliability
of civil infrastructure can be identified and the sustainable policy on the design and
maintenance of civil infrastructure can be examined. Thereby, an objective function is
derived in the context of non-marginal decision analysis that is different from the
objective function employed in the classical life-cycle cost optimization concept. The
reason for this is provided by looking at the differences in the formulation of the
decision problems in marginal and non-marginal decision analysis. In this chapter, the assumptions underlying the derivation of the classical life-cycle cost optimization concept and its limitations are also introduced in order to emphasize the difference between non-marginal and marginal decision analysis. Chapter 8 concludes
the present work.
Zusammenfassung
The question of sustainable societal development has gained increasing importance, especially over the last two decades. The focus lies on the limited natural resources and the fragile environment, which come under even greater pressure as a result of the tremendous economic development of emerging countries such as China and India. As it becomes ever more apparent that the activities of our own generation might impair the development possibilities of following generations, the demand for sustainable societal development has become an essential political goal. A political milestone was set in 1987 by the Brundtland report "Our Common Future". This decisive event strengthened public awareness that substantial changes in consumption patterns will be necessary in the future. Since the publication of the Brundtland report, the topic of sustainability has influenced the agendas of many research groups worldwide.
The development of such a framework is the most relevant task that researchers in the field of sustainable decision-making have to address. It is not foreseeable that a solution will be found in this field in the near future. Nevertheless, there is currently great pressure to have methods available that enable decision-makers from all fields to identify the "most sustainable" course of action. The term "most sustainable" implies that the courses of action conform to the measures, regulations, principles, ethics and all other circumstances in a society that are regarded as "best practice" for the realization of sustainable development in a society.
These multifaceted aspects were the motivation for this work, which relates to the field of civil engineering. Two main aims are pursued in this work. The first is to reformulate the classical life-cycle cost optimization concept, which is regarded as the decision principle in civil engineering, in such a way that aspects of sustainability can be taken into account in the decision process. The aspects of sustainability that receive particular consideration in the reformulation are the principle of intergenerational equity and the allocation of limited resources. For applicability in real decision situations, a platform for the modelling and optimization of decision problems is proposed, based on Bayesian probabilistic networks. This makes it possible to take into account, in the decision process, the constraints posed by the aspects of sustainability. The second aim is to present a fundamental approach that makes it possible to consider the structural reliability of civil infrastructure in general economic models, so that sustainable decisions regarding the design and maintenance of such facilities can be identified from a macroeconomic perspective.
The present work is structured into eight chapters. Chapter 1 presents the aims of the work, delimits its scope and explains its background. In the first part, an overview of the literature in the relevant fields of economics and civil engineering is given, in particular in the areas of the formulation and optimization of sustainable decision problems. The core of this work consists of six chapters (Chapters 2 to 7). Each of these chapters (with the exception of Chapter 7) represents a part of my research work during the doctorate that has already been published or accepted for publication. Chapter 2 deals with the general treatment of uncertainties in engineering decision analysis and presents the philosophical basis for engineering decision-making under uncertainty. Chapters 3 to 5 investigate the modelling and optimization of decision problems, taking into account the aforementioned aspects of sustainability. Chapter 6 presents an approach by which the structural reliability of civil infrastructure can be taken into account in general economic models and in models describing economic growth. This approach corresponds to non-marginal decision analysis. In Chapter 7, this approach is applied to a simple economic model in order to show how the optimal reliability of civil infrastructure can be identified and how a sustainable strategy with regard to design and maintenance can be pursued. To this end, an objective function is derived in a non-marginal context which differs substantially from the objective function used in the classical life-cycle cost optimization approach. The reason for these differences lies in the formulation of the problem in the marginal and the non-marginal decision setting. In this chapter, the classical assumptions and limitations are also addressed in order to highlight the differences between these two approaches. Chapter 8 concludes the work.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
ABSTRACT
ZUSAMMENFASSUNG
1. INTRODUCTION
   1.1. Relevance
   1.2. Aim of the thesis
   1.3. Scope of the thesis
   1.4. State of the art in relevant research topics
      1.4.1. Structural performance of civil infrastructure
      1.4.2. Socio-economic role of civil infrastructure
      1.4.3. Implication and formulation of sustainability
   1.5. Outline of the thesis
   Abstract
   2.1. Introduction
      2.1.1. Aleatory and epistemic uncertainties
      2.1.2. Probabilistic modeling approach in practice
   2.2. General principles for the probabilistic modeling of events subject to aleatory and epistemic uncertainty
   2.3. Examples
      2.3.1. N-year maxima
      2.3.2. Return period
      2.3.3. Hazard curve
   2.4. Discussion
   2.5. Conclusion
   2.6. Appendix
   Abstract
   Keywords
   3.1. Introduction
   3.2. Problem setting
      3.2.1. Modelling of complex systems
      3.2.2. Bayesian hierarchical modelling
      3.2.3. Optimization of engineering decisions under constraints
      3.2.4. Objective of proposed approach
   3.3. Proposed approach
      3.3.1. Hierarchical system modelling with Bayesian probabilistic networks
      3.3.2. Objective function and constraints
      3.3.3. Optimization of actions for components of complex system
   3.4. Example 1
      3.4.1. Model description
      3.4.2. Results
      3.4.3. Discussion
   3.5. Example 2
      3.5.1. Optimization of target reliability for welded joints in components
      3.5.2. Results and discussion
   3.6. Conclusions
   Abstract
   Keywords
   4.1. Introduction
   4.2. Multi-decision-makers and criteria for sustainability
   4.3. Equivalent sustainable discount rate
   4.4. Example
      4.4.1. Cost distribution over time
      4.4.2. Optimization of the concrete cover thickness
   4.5. Discussion
   4.6. Conclusions
   4.7. Annex A
   Abstract
   Keywords
   5.1. Introduction
   5.2. Budget management approach
      5.2.1. Resource allocation
      5.2.2. Net benefit maximization
   5.3. Example
      5.3.1. Maintenance planning for a portfolio of RC structures
      5.3.2. Inspection, repair and failure
      5.3.3. Probabilistic corrosion model
      5.3.4. Cost model
      5.3.5. Numerical results
   5.4. Discussions
   5.5. Conclusions
6. SOCIETAL PERFORMANCE OF INFRASTRUCTURE SUBJECT TO NATURAL HAZARDS (PAPER V)
   Abstract
   Keywords
   6.1. Introduction
   6.2. Problem setting
   6.3. Role of infrastructure in economic context
   6.4. Proposed methodology
      6.4.1. Definition of infrastructure failure
      6.4.2. Equation of capital accumulation
   6.5. Illustrative example
   6.6. Discussion
   6.7. Conclusion
1. Introduction
1.1. Relevance
Sustainable design and maintenance policies on civil infrastructure have become a
relevant subject in both developed and developing countries. Many developed
countries are presently experiencing severe deterioration of older infrastructure.
Developing countries are repeatedly faced with losses of infrastructure due to natural hazards. In addition, these countries continuously suffer losses of infrastructure due to deterioration arising from the lack of appropriate maintenance work.
1 These numbers are calculated based on the statistics provided by EUROCONSTRUCT (2007).
2 The average over Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Hungary, Ireland, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland and the United Kingdom, which are included in EUROCONSTRUCT (2007).
3 Adjusted to US dollar in 2003. The same applies in the following unless otherwise stated.
4 GDP in the previous year of the hazard event occurrence.
5 Note that the economic loss induced by Hurricane Katrina in 2005 is estimated at US$125 billion, Munich Re (2005). However, this amounts to only slightly more than 1% of the GDP of the United States in 2004, i.e. US$11 trillion (World Development Indicators Database, World Bank).
Motivated by this and focusing on the civil engineering sector, the present thesis has
two aims. The first aim is to reformulate the classical life cycle cost optimization
concept advocated in civil engineering as the decision principle, in such a way that
relevant aspects of sustainability can be incorporated in engineering decision-making.
The relevant aspects of sustainability considered in this reformulation are
intergenerational equity and allocation of limited resources. Furthermore, for use in
practical decision situations, a platform is proposed for the modelling and optimization
of decision problems based on Bayesian probabilistic networks. The proposed platform enables one to consider the constraints dictated by society, e.g. in terms of regulations, for the realization of sustainable societal development. The second aim is to provide a fundamental approach for incorporating the reliability of civil infrastructure into general economic models so that appropriate policies for the design and maintenance of civil infrastructure can be identified in the context of macroeconomics.
To achieve these aims systematically and also to facilitate a clear focus on individual
problems, the following four issues are identified. In the present thesis, each of these
issues is investigated individually.
Issue 1: Uncertainties
Decisions involving design and maintenance policies on civil infrastructure must be
made subject to significant uncertainties. These uncertainties are associated in two ways with the randomness of natural phenomena, such as the physical process of material deterioration, changes of the environment surrounding the infrastructure and the occurrence of natural hazards. Firstly, the randomness of nature itself is one type of uncertainty (aleatory uncertainty). By definition, this type of uncertainty cannot be reduced. Secondly, modelling the characteristics of the randomness of nature gives rise to the other type of uncertainty (epistemic uncertainty). In principle, this type
of uncertainty can be reduced by a better understanding of the phenomena; however,
although some of the epistemic uncertainties may be reduced by merely collecting
more information, for others a reduction may not be possible in the foreseeable future.
Both types of uncertainty are relevant to decision problems when looking at the choice
of optimal policies, and they must be consistently taken into account in the decision
problems.
society, e.g. individuals, communities, scientists and politicians. Therefore, the present
work, which focuses on engineering decision analysis, does not directly discuss these
topics, but instead relies on relevant related research works presently available. The
state of the art in these topics is briefly summarized in the next section, in addition to
research work on the structural performance of civil infrastructure and the
socio-economic role of civil infrastructure.
The present thesis defines two types of engineering decision analysis: marginal engineering decision analysis and non-marginal engineering decision analysis. An engineering decision is marginal if the consequence of the decision does not influence the economic growth of society. As will be discussed in Section 7.2, this condition is the assumption required for the application of the life-cycle cost optimization concept. Marginal decision analysis is thus most suitable, e.g., for decision situations in which private firms optimize individual engineering projects under constraints such as budget constraints and regulations imposed by authorities, or in which societal decision-makers optimize the allocation of given resources in a portfolio of public engineering projects where the benefits from the projects are not reinvested into capital but are consumed. In contrast, an engineering decision is non-marginal if the consequence of the decision affects economic growth. An important example of a non-marginal engineering decision is code-making for civil infrastructure; a higher acceptance criterion for human safety imposes higher construction and maintenance costs on civil infrastructure, which results in a smaller rate of capital accumulation.
The scope of the present thesis is thus to investigate the issues mentioned in the
previous section in these two contexts: Issues 1 to 3 in the context of marginal
engineering decision analysis and Issue 4 in the context of non-marginal engineering
decision analysis.
Whereas some attempts were made to base structural performance on probability (see
Mayer (1926), Wierzbicky (1936) and Freudenthal (1947)), this important concept was
clearly formulated by Freudenthal (1954), wherein the failure and unserviceability of
structures are defined with due consideration given to uncertainties associated with
both loading on and the resistance of structures. Subsequently, the theory was extended
in many directions, which presently constitute the structural reliability theory. The
so-called second-moment concept gained recognition at an early stage in the development of structural reliability theory. This concept does not require an assumption about the form of the probability distribution functions in order to measure the reliability of structures (by the reliability index), but only requires the first two moments of the random variables that characterize the reliability of structures. Due to this relatively simple way of measuring reliability, and further promoted by the work of Cornell (1969), the concept was widely accepted.
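As a minimal illustration of the second-moment idea (a standard textbook sketch, not a formulation taken from this thesis): for a linear limit state function g = R - S, with uncorrelated resistance R and load effect S described only by their means and standard deviations, the reliability index is

$$ \beta = \frac{\mu_g}{\sigma_g} = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}} $$

and no assumption about the distribution types of R and S is needed to evaluate it.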
However, for the same reason, the concept has several disadvantages. One of the most
significant disadvantages is that the reliability index measured in accordance with this
concept is not invariant; the measured reliability index can differ in the algebraic
reformulations of the equations that mathematically represent the failure of structures,
i.e. limit state functions. This lack of invariance was resolved by Hasofer and Lind (1974) through the introduction of a geometrical definition of the reliability index. Thereafter, a number of extensions of this definition have been proposed to incorporate more
information on the distributions of the random variables that characterize the reliability
of structures, e.g. the first order reliability methods (FORM) and the second order
reliability methods (SORM), see Ditlevsen and Madsen (2005) for an overview.
Other extensions are directed at application to the analyses for cases where the
reliability of structures may change over time, see e.g. Lin (1967), Ferry-Borges and
Castanheta (1971) and Vanmarcke (1983). The techniques developed for time-variant
reliability analysis have been widely applied to examine e.g. the reliability of
deteriorating structures and the dynamic response of structures in a probabilistic
manner. However, the techniques practically applicable for these analyses are highly
dependent on the nature of the stochastic processes that characterize the resistances of
structures and the loads on the structures.
The structural reliability theory has also been extended to investigate the reliability of
structural systems. Earlier contributions to this extension primarily focus on the
development of algorithms for evaluating the probability of system failure defined by a
set of limit state functions, see e.g. Hohenbichler and Rackwitz (1982), Der Kiureghian
and Moghtaderi-Zadeh (1982), Ditlevsen and Bjerager (1986). Later, based on these
earlier contributions, more systematic and realistic approaches have been developed
for evaluating the reliability of structural systems. These approaches include the
consideration of the statistical dependence of the performance of structural system
components, e.g. Straub and Der Kiureghian (2008), Song and Kang (2008) and Der
Kiureghian and Ditlevsen (2008).
Today, some generic software tools for the reliability analysis of structures and
structural systems are available, e.g. STRUREL/COMREL (RCP GmbH) and
CalREL/FERUM (Der Kiureghian et al. (2006)).
The probability-based concept for the evaluation of structural performance has been
applied to the design optimization of structures within the framework of life-cycle cost
analysis. Therein, the optimal design is obtained by minimizing the sum of the initial
cost and the expected future costs due to possible failures. This life-cycle cost
optimization concept was first introduced by Rosenblueth and Mendoza (1971) in civil
engineering. At the same time, Bayesian decision theory was developed, see e.g. Raiffa
and Schlaifer (1961), Lindley (1965) in general and Benjamin and Cornell (1970) for
the application to civil engineering in particular. Later, the life-cycle cost optimization
concept was formally integrated into the framework of Bayesian decision theory.
Presently, the life-cycle cost optimization concept and Bayesian decision theory are
widely accepted and employed as the guiding philosophical principles in a variety of
engineering decision problems. The most important and successful applications of the
concept and the theory in civil engineering include: risk-based inspection planning e.g.
Tang (1973), Thoft-Christensen and Sørensen (1987), Faber et al. (2000) and Straub
(2004); reassessment of existing structures, e.g. JCSS (2001a); code making, e.g. JCSS
(2001b) and Rackwitz (2000).
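To fix ideas, a minimal sketch of the type of objective function referred to above (an illustration under simplifying assumptions, not the exact formulation used in this thesis or in the cited works): with design parameter d, initial cost C_0(d), failure cost C_F, an approximately constant annual failure probability P_f(d), systematic reconstruction after failure and annual discount rate γ, the expected life-cycle cost over an unbounded time horizon is approximately

$$ C_{\mathrm{LCC}}(d) \approx C_0(d) + C_F \sum_{t=1}^{\infty} \frac{P_f(d)}{(1+\gamma)^{t}} = C_0(d) + C_F\, \frac{P_f(d)}{\gamma} $$

and the optimal design follows from minimizing C_LCC(d) over d, trading off the initial cost against the discounted expected failure costs.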
Recently, the life-cycle cost optimization concept has been applied in the context of
sustainable societal development. However, most of these applications do not
explicitly consider intergenerational aspects; the utility function assumed in these
applications corresponds to the utility of one representative individual who is assumed
to live for an infinite time. The exception is Rackwitz et al. (2005), who consistently
consider the intergenerational aspect and apply discounting accordingly for the
marginal cost-benefit analysis of individual civil infrastructure projects. However, no
The assessment of the social return rate is mostly made by relying on statistical
analysis techniques, especially regression analysis, see e.g. Chapters 11 and 12 in
Barro and Sala-i-Martin (2004). One of the problems of standard regression analysis is
that it is difficult to identify the causality between economic growth and infrastructure
investment; whether economic growth demands more infrastructure capital, or whether
increased infrastructure capital leads to an increase of economic output, see e.g.
Duffy-Deno and Eberts (1991) and Canning and Bennathan (2000). In order to avoid
the causality problem, several techniques have been developed, e.g. Engle and Granger
(1987) and Canning (1999), and applied to the estimation of the social return rate.
Using these techniques, Canning and Bennathan (2000) show that investment in civil
infrastructure can result in an increase of economic output.
The results of these assessments of the productivity of civil infrastructure are not only useful for discussing the effectiveness of investment in civil infrastructure, but also serve as building blocks of economic models that represent the productivity of civil infrastructure.
The development of economic models for the economic role of civil infrastructure
capital is often based on the growth theory. The growth theory aims, in general, at
describing the long-term development of the economy in which different stakeholders,
e.g. households, firms and governments, maximize their own objective functions. The
original work on the growth theory is by Ramsey (1928). It investigates the optimal
saving rate of households to achieve their maximum utility in an infinite time horizon.
Today the theory presented therein forms the fundamental basis for a variety of economic theories, ranging from consumption theory and asset pricing to business-cycle theory (Barro and Sala-i-Martin (2004)). This work was later refined by Cass (1965) and Koopmans (1965). Meanwhile, Solow (1956) and Swan (1956) proposed a model known today as the Solow-Swan model, which employs the neoclassical form of the production function and the assumption that the saving rate is constant and exogenously given. These conditions result in a very simple representation of the general equilibrium of the economy. For this reason, the Solow-Swan model is widely used, in spite of claims that its assumptions are neither realistic nor consistent with actual observations.
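For reference, a minimal sketch of the Solow-Swan dynamics in per-capita terms (the standard textbook form, not a formulation specific to this thesis): with capital per worker k, a neoclassical production function f(k), e.g. the Cobb-Douglas form f(k) = k^α with 0 < α < 1, an exogenous saving rate s, population growth rate n and depreciation rate δ,

$$ \dot{k}(t) = s\, f\big(k(t)\big) - (n + \delta)\, k(t) $$

and the steady-state capital intensity k* satisfies s f(k*) = (n + δ) k*.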
Whereas these classical models involve labor and (aggregated) capital as factors of
production, modern models have been proposed that explicitly incorporate specific
factors of production, e.g. technology (e.g. Arrow (1962)) and natural resources (e.g.
Stiglitz (1974), Dasgupta and Heal (1974) and Solow (1974)). More recently, so-called
endogenous models have been developed, which enable the long-term growth of the
economy to be described without relying on exogenous growth factors (e.g. Romer
(1986) and Lucas (1988)). Today, both these modern and classical models are widely
applied as tools to investigate the sustainability of the economy, see e.g. Pezzey and
Withagen (1998), Krautkraemer (1999) and Valente (2005).
Within the framework of the growth theory, several directions have been proposed to
incorporate civil infrastructure capital in economic models as one of the production
factors. For instance, Glomm and Ravikumar (1994) implement civil infrastructure
capital into the production function of private firms as an external input. Duggal et al.
(1999) incorporate civil infrastructure capital in the production function as part of the
technological constraints. These production functions can then be employed to discuss
sustainable policies for investment in civil infrastructure and sustainability of the
economy.
However, most of the economic models that incorporate civil infrastructure capital assume that the deterioration rate of the infrastructure capital is exogenously given and constant; the deterioration rate is not considered as a variable. This means that the average reliability of the infrastructure remains constant over the entire time period, independent of the state of the growing economy; the reliability remains the same whether the economy is in a poor state or in a richer state. However, this is not realistic, since the deterioration rate of infrastructure can be dynamically controlled by means of the design and maintenance policies on civil infrastructure. There are only a few
research studies available that consider the deterioration rate as a variable. Rioja
(2003) proposes a dynamic general equilibrium model that explicitly considers
investment into maintenance work of civil infrastructure, thereby incorporating the
effect of the maintenance works on the deterioration rate of infrastructure. This model
is extended by Kalaitzidakis and Kalyvitis (2004), which endogenizes the decision of
budget allocation into both investment in the construction of new infrastructure and
investment in maintenance work on existing infrastructure.
The use of these models is a promising way to investigate the optimal reliability level of infrastructure as a function of economic growth, and thereby to identify optimal policies for the design and maintenance of civil infrastructure in a macroeconomic context. However, the assumptions made in these models are too simplistic with regard to the relation between the amount of investment in maintenance work and the deterioration rate; for instance, investment in maintenance work at one particular time influences the deterioration rate at that time but not the deterioration rate in the future. Realistic models, and a methodology that can incorporate engineering knowledge into the models, are still missing and thus need to be developed.
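To make this modelling direction concrete, a hypothetical sketch (my own illustration of the idea described above, not a model taken from Rioja (2003) or Kalaitzidakis and Kalyvitis (2004)): let K_G denote infrastructure capital, I_G investment in new construction and m expenditure on maintenance; an endogenous deterioration rate could then be written as

$$ \dot{K}_G(t) = I_G(t) - \delta\big(m(t)\big)\, K_G(t), \qquad \delta'(m) < 0 $$

so that higher maintenance expenditure lowers the effective deterioration rate. The limitation noted above is visible here: δ depends only on the contemporaneous maintenance expenditure m(t) and carries no memory of past maintenance.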
capital, human capital) and natural capital (e.g. non-renewable resources) is the focus
of discussion, see Chapter 4 in Perman et al. (2003). Therein, a distinction is made between two concepts of sustainability: weak sustainability and strong sustainability. The perspective of weak sustainability is that man-made capital can substitute for natural capital; thus a certain production level can be maintained by keeping the sum of both types of capital at a given level. On the other hand, the perspective of strong sustainability is that the level of production can be sustained only if natural capital is provided at a certain level. If the strong sustainability perspective is taken, the level of production can be maintained over an infinite time horizon only by exploiting natural capital indefinitely, which seems infeasible, at least for non-renewable resources. In contrast, based on the weak sustainability perspective, the (feasible) conditions under which a certain level of production and thus consumption can be maintained are derived by Hartwick (1977) and Hartwick (1978).
Today, there is no general agreement on the definition of and criteria for sustainability; a steady increase of consumption or utility over time is often considered as the criterion
for sustainability, see e.g. Withagen (1996), in which relevant works that employ this
definition are listed. However, this is not a unique criterion, see e.g. Pezzey (1992) and
Pezzey (1997). Some concepts which are widely used and discussed in economics are
stated as follows:
Chapter 3 (Paper II) proposes a method for optimizing decisions for complex
engineering systems under constraints. Constrained optimization problems are often
encountered in engineering decision analysis, especially where societal preferences
must be taken into account. For instance, transport networks have to be designed and
maintained by satisfying requirements on human safety over their entire operation
periods. An engineering facility may have to satisfy the regulations imposed for
environmental protection, e.g. in terms of maximum leakage of harmful biochemical
agents. The proposed method employs Bayesian probabilistic networks for the
probabilistic representation of the structural performance of complex systems, and
generic algorithms for solving constrained optimization problems. Since these techniques are readily available in software tools, the proposed method can be directly applied in practical decision situations.
Chapter 6 (Paper V) proposes an approach for how the reliability of infrastructure can
be treated in the context of macroeconomics. The proposed approach consists of two
steps: (1) defining infrastructure failure by limit state representations; (2)
implementing the reliability concept in economic models. The first step is based on structural reliability theory and the second step employs economic growth theory. Thus, the proposed approach can incorporate civil engineering knowledge concerning structural performance into economic models. In order to show how the proposed approach can be applied, an illustrative example is provided. Therein, a
simplistic economy is assumed, which solely depends on civil infrastructure as the
production factor and is subject to natural hazards, and the economic growth path is
examined as a function of the policy on the design and maintenance of civil
infrastructure.
In Chapter 7, the proposed approach is applied to another simplistic economy, and the
steady and transition states of the economy are examined as a function of the policy on
the design and maintenance of civil infrastructure. By analyzing the steady state a
decision principle is derived, which differs from the decision principle adopted in the
life-cycle cost optimization concept. Furthermore, by analyzing the transition state, it is shown that the optimal policy at each point in time depends on the current economic output level.
2. Probabilistic assessment of extreme events subject to epistemic uncertainties (Paper I)
Kazuyoshi Nishijima
Marc A. Maes
Abstract
Over the years, the modeling and treatment of aleatory and epistemic uncertainties in probabilistic assessments have repeatedly been an issue of discussion and also of some
controversy. The philosophical and mathematical aspects may be said to be well
appreciated; however, there are cases in practice where principles seem to be violated
and frequently the effects of the epistemic uncertainty are treated inconsistently in the
probabilistic modeling. The present paper first reviews the general principles for the
modeling and treatment of uncertain characteristics subject to both aleatory and
epistemic uncertainties. Thereafter, the general principles are applied considering three
examples concerning the probabilistic modeling of extreme events; 1) the n-year
maximum distribution, 2) the corresponding return period and 3) the exceedance
probability in hazard analysis. Through these examples typical inconsistencies made in
practical probabilistic assessments are pointed out. The results from the examples are
interpreted and discussed from a structural design perspective and from a rational
risk-based decision perspective. Finally, a practical solution to avoid the
inconsistencies is suggested emphasizing the analogy of the analysis of extreme events
with the analysis of portfolios.
2.1. Introduction
The probabilistic modeling of events, and not least extreme events, forms a crucial cornerstone in risk-based decision making concerning the design, assessment, inspection and maintenance planning for engineering structures and facilities. The assessment of probabilities can be performed based on probabilistic models that describe the events of interest: extreme wave heights, current and wind velocities, etc.
In general, such probabilistic models are established through the joint consideration of
knowledge, experience and observations; combining statistical assessments with
subjective judgments. Consequently, very often the resulting probabilistic models are associated not only with aleatory uncertainties, i.e. the inherent natural variability associated with the phenomenon of interest, but also with significant epistemic uncertainties. It is of utmost importance that both of these two contributions to
uncertainty are treated correctly in the probabilistic assessments.
In the literature a number of discussions have been made on how uncertainties arising
from different sources may be categorized and how these different categories should
and/or can be considered in probabilistic risk assessment and risk-based decision
making, e.g. Raiffa and Schlaifer (1961), Pate-Cornell (1996), Faber (2003), Wen et al.
(2003), Faber and Maes (2005) and Der Kiureghian and Ditlevsen (2007). It can be
said that the relevance of epistemic uncertainties in risk assessments is well recognized
and also the general principles for modeling and assessing the relevant probabilistic
characteristics seem well understood. However, there are still several situations where
the general principles are violated in practice. The present paper considers the
treatment of aleatory and epistemic uncertainties especially in the probabilistic
The present paper first reviews the general principles for the probabilistic modeling of
uncertain characteristics subject to both aleatory and epistemic uncertainties.
Thereafter, three examples are considered, pointing out in parallel the typical inconsistent assessments often made in practice and the results of a correct assessment
following the general principles. Finally, a practical procedure to avoid inconsistent
probabilistic assessments of extreme events is presented based on an analogy to the
probabilistic modeling and treatment of portfolio loss assessments.
The pure statistical approach has been preferred by classical statisticians since the
results of such models are coherent with the frequentistic interpretation of
probabilities; there is a one-to-one correspondence between observations and model
predictions. Typically the statistical models are formulated as annual extreme value
distributions, and the extreme value theory thus provides the justification for assuming
either one of the three extreme value distributions or the generalized extreme value
distribution, e.g. Leadbetter et al. (1983) and Coles (2001). This approach may be a
reasonable solution for cases where the detailed physical mechanisms that govern the
hazard events are not well understood or too complex to represent in a practically
manageable effort. However, this approach also has drawbacks: 1) direct observations of extreme events are by definition rare, which is why the parameter estimation of the distributions generally involves large statistical uncertainties (epistemic uncertainty), and 2) the potentially available scientific knowledge and/or engineering experience cannot be included in the modeling. To overcome these drawbacks, engineering probabilistic approaches have been developed for different types of hazards, which enable one to integrate the available knowledge and engineering experience into the hazard analysis. For instance, in Nishijima and Faber (2007b) hurricane simulation techniques have been developed for wind hazard analysis, integrating several probabilistic model components, each of which represents an individual part of the involved physical mechanisms, e.g. the transition of hurricanes and the development of the pressure fields.
In the pure statistical modeling approach the distinction between epistemic uncertainty
and aleatory uncertainty is relatively clear, since the epistemic uncertainty is primarily
statistical uncertainty that is involved in the parameter estimation of the distributions
(including uncertainty on the choice of distribution family). The epistemic uncertainty
can be integrated into the probabilistic assessments within the Bayesian statistical
framework, e.g. Coles et al. (2003), although in practice it is often neglected. On the
other hand, in the engineering approach taking basis in the Bayesian framework the
epistemic uncertainties are associated with each individual probabilistic model
components that jointly comprise the probabilistic assessment model in terms of model
uncertainty and statistical uncertainty.
the random phenomena of the real world, which is why epistemic uncertainty is introduced to account for such model uncertainties.
Figure 2.1 illustrates the roles of aleatory and epistemic uncertainties in probabilistic modeling. Probabilistic characteristics of extreme events are first assessed conditional on the epistemic uncertainty θ and then integrated over the possible realizations of the epistemic random variables Θ. The epistemic random variables Θ should be interpreted heuristically; they represent not only the uncertainties of the parameters of distributions but also the likelihood or degree of belief associated
2.3. Examples
Three examples are now considered in order to illustrate how the general principle introduced in the previous section might be utilized in practice. Through the examples, the typical inconsistent probabilistic assessments of characteristics of extreme events which are commonly utilized in engineering design and assessment are pointed out, and the probabilistic models which follow from the application of the general principle are provided. The discussion of the implications of the results is provided subsequently in Section 2.4.
$$ g(\mathbf{X}) = I\!\left[\max_{i=1,2,\ldots,n} \{X_i\} \le x\right] \qquad (2.2) $$

where I[·] is an indicator function that returns the value one if the condition in the brackets is satisfied and zero otherwise, and X_i is the i-th year maximum. By substituting Equation (2.2) into Equation (2.1) the cumulative distribution function is obtained as:

$$ F_{X,n}(x) = \int \left\{F(x \mid \theta)\right\}^{n} p(\theta)\, d\theta \qquad (2.3) $$
In practice deviations from the general principle are observed. One example for this
concerns the utilization of probabilistic hazard maps or load recommendations for risk
management purposes. Hazard maps usually provide characteristic values, e.g. quantile
values including the effect of the epistemic uncertainty, e.g. in the form of
conservatively assessed fractile values or median values of the fractile values relative
to the epistemic uncertainties. Based on these characteristic values a distribution
function of annual maxima F(x) is established and, based on this, the distribution of the n-year maximum is finally calculated as:

$$ F_{X,n}^{*}(x) = \left\{F(x)\right\}^{n} \qquad (2.4) $$

with

$$ F(x) = \int F(x \mid \theta)\, p(\theta)\, d\theta \qquad (2.5) $$
Obviously, $F_{X,n}(x)$ and $F_{X,n}^{*}(x)$ are in general not identical. Furthermore, for n > 1 it can be shown by applying Jensen's inequality that

$$ F_{X,n}(x) = E_{\Theta}\!\left[\left\{F(x \mid \Theta)\right\}^{n}\right] \ge \left\{E_{\Theta}\!\left[F(x \mid \Theta)\right]\right\}^{n} = \left\{F(x)\right\}^{n} = F_{X,n}^{*}(x) \qquad (2.6) $$

The equality holds if there is no epistemic uncertainty. Thus, for any given quantile the corresponding value is larger when $F_{X,n}^{*}(x)$ is employed instead of $F_{X,n}(x)$; n-year maximum events are overestimated when $F_{X,n}^{*}(x)$ is employed.
where θ represents the epistemic uncertainty and α = 0.257 (this corresponds to a standard deviation of 5 [m/s] given θ). The epistemic uncertainty represented by the random variable Θ is assumed to follow the Normal distribution with mean and standard deviation equal to 20 [m/s] and 5 [m/s] respectively. Figure 2.2 shows the assessed probability density functions of the 50-year maximum in accordance with Equations (2.3) (denoted as "consistent") and (2.4) (denoted as "inconsistent") respectively. It is seen that the probability density functions look significantly different and that the mean value of the 50-year maximum wind speed is overestimated when it is evaluated using Equation (2.4).

Figure 2.3 shows the corresponding exceedance probabilities of the 50-year maximum wind speed. Whereas the (inconsistent) Equation (2.4) overestimates the exceedance probability in the range between $10^{-1}$ and 1, the tendency diminishes in the range of lower probabilities. These results should be appreciated depending on the context, as will be discussed further in the subsequent section.
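To make the comparison concrete, the following sketch evaluates the consistent and inconsistent 50-year maximum distributions numerically. Since the conditional distribution of the annual maximum (Equation (2.7)) is not reproduced in this extract, the sketch assumes a Gumbel form with location θ and scale 1/α, which matches the stated conditional standard deviation of 5 m/s for α = 0.257; the exact model used in the thesis may differ.

import numpy as np
from scipy import stats

# Assumed conditional annual-maximum model (Equation (2.7) is not shown in this
# extract): Gumbel with location theta and scale 1/alpha, so that the conditional
# standard deviation is pi / (alpha * sqrt(6)) ~= 5 m/s for alpha = 0.257.
alpha = 0.257
n = 50  # reference period in years

# Epistemic uncertainty: Theta ~ Normal(20 m/s, 5 m/s), as stated in the example.
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
theta = 20.0 + 5.0 * nodes            # Gauss-Hermite nodes mapped to Normal(20, 5)
w = weights / weights.sum()           # normalized quadrature weights

def F_cond(x, th):
    # Conditional annual-maximum CDF F(x | theta), assumed Gumbel.
    return stats.gumbel_r.cdf(x, loc=th, scale=1.0 / alpha)

def F_n_consistent(x):
    # Equation (2.3): raise to the power n conditionally, then integrate over Theta.
    return np.sum(w * F_cond(x, theta) ** n)

def F_n_inconsistent(x):
    # Equation (2.4): integrate over Theta first, then raise to the power n.
    return np.sum(w * F_cond(x, theta)) ** n

x_grid = np.linspace(10.0, 90.0, 801)
Fc = np.array([F_n_consistent(x) for x in x_grid])
Fi = np.array([F_n_inconsistent(x) for x in x_grid])

# Jensen's inequality, Equation (2.6): the consistent CDF dominates the inconsistent one.
assert np.all(Fc >= Fi - 1e-12)

# Approximate mean 50-year maxima, E[X] ~= x_min + integral of (1 - F) over the grid.
mean_c = x_grid[0] + np.trapz(1.0 - Fc, x_grid)
mean_i = x_grid[0] + np.trapz(1.0 - Fi, x_grid)
print(f"mean 50-year maximum, consistent (2.3):   {mean_c:.1f} m/s")
print(f"mean 50-year maximum, inconsistent (2.4): {mean_i:.1f} m/s")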
The return period may be defined as the expected value of the arrival time of the event
of interest, see e.g. Benjamin and Cornell (1970). Assuming that the probability of
occurrence of an event in a Bernoulli sequence of trials is p , then the arrival time
follows the geometric distribution. The expected value of the arrival time E[T ] is
then calculated as 1/ p . When the event is characterized by its intensity X , e.g. a
given wind speed or a given precipitation, the probability p is represented by the
cumulative distribution function F ( x ) of the maximum within a given period (e.g.
one year). Thus, the return period is a function of the intensity x and may be written
as:

$$ E[T(x)]^{*} = \frac{1}{1 - F(x)} \qquad (2.8) $$

$$ E[T(x)] = E_{\Theta}\!\left[\frac{1}{1 - F(x \mid \Theta)}\right] = \int \frac{p(\theta)}{1 - F(x \mid \theta)}\, d\theta \qquad (2.9) $$

where F(x|θ) is the cumulative distribution function conditional on the epistemic uncertainty θ and p(θ) is the probability density function of θ. This formulation is coherent with the general principle given in Equation (2.1).
Probabilistic engineering models are often employed where the cumulative distribution
function of the maximum intensity within a given reference period is established by
combination of probabilistic models that represent the natural random nature (aleatory
$$ F(x) = \int F(x \mid \theta)\, p(\theta)\, d\theta \qquad (2.10) $$

The return period is often assessed by combining Equations (2.8) and (2.10) as:

$$ E[T(x)]^{**} = \frac{1}{1 - F(x)} \qquad (2.11) $$
This is obviously not the same as Equation (2.9), and it can be shown by applying Jensen's inequality that:

$$ E[T(x)] = E_{\Theta}\!\left[\frac{1}{1 - F(x \mid \Theta)}\right] \ge \frac{1}{1 - E_{\Theta}\left[F(x \mid \Theta)\right]} = \frac{1}{1 - F(x)} = E[T(x)]^{**} \qquad (2.12) $$

The equality in Equation (2.12) holds if there is no epistemic uncertainty; in that case E[T(x)] and E[T(x)]** coincide. From this inequality, it can be said that the return period assessed by Equation (2.11) underestimates the expected arrival time.
In Figure 2.4 the results of a probabilistic assessment of the relation between extreme
wind speeds and corresponding return periods are shown. Based on the same
assumptions as in the first example, it is seen that the application of Equations (2.9)
and (2.11) results in different return periods. For instance, based on the
application of Equation (2.11) a wind speed of 40 m/s corresponds to a return period of
80 years, whereas the correct return period using Equation (2.9) is in fact 400 years.
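Under the same assumed Gumbel/Normal model as in the previous sketch, the two return-period definitions can be compared directly; the following evaluates Equations (2.9) and (2.11) by numerical integration.

```python
import numpy as np
from scipy import integrate, stats

ALPHA, THETA = 0.257, stats.norm(20.0, 5.0)      # assumed model, as in the previous sketch

def F_cond(x, theta):
    return np.exp(-np.exp(-ALPHA * (x - theta)))  # assumed Gumbel annual-maximum CDF

def T_consistent(x):
    """Eq. (2.9): E[T(x)] = integral of p(theta) / (1 - F(x|theta)) over theta."""
    f = lambda t: THETA.pdf(t) / (1.0 - F_cond(x, t))
    return integrate.quad(f, -40.0, 80.0)[0]

def T_inconsistent(x):
    """Eq. (2.11): 1 / (1 - F(x)), with F(x) marginalized first as in Eq. (2.10)."""
    F_bar = integrate.quad(lambda t: F_cond(x, t) * THETA.pdf(t), -40.0, 80.0)[0]
    return 1.0 / (1.0 - F_bar)

x = 40.0
print(T_consistent(x), T_inconsistent(x))
```

Under these assumptions the two printed values are on the order of 400 and 80 years respectively, which is consistent with the difference quoted above for a wind speed of 40 m/s.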
Seismic hazard analysis aims at assessing the probability of exceedance of any given
seismic hazard intensity x for a specified reference period, e.g. one year, (seismic
hazard curve). In the assessment of this probability several assumptions and
probabilistic models are required, e.g. for the occurrence of earthquakes in the seismic
zone, the magnitude of the earthquake, the distance between the epicenter and the site
for which the hazard analysis is performed and the so-called attenuation law that
relates the relevant parameters and the seismic hazard intensity. Essentially such
assumptions and probabilistic models involve epistemic uncertainty due to the
imperfection of the postulated models and scarce data available for estimating
parameters in the models. Whereas the presence of epistemic uncertainty in general is
appreciated and some epistemic uncertainties are considered correctly, other epistemic
uncertainties are often inconsistently considered. Examples of cases where epistemic
uncertainties are consistently accounted for include the epistemic uncertainty
associated with the choice of attenuation law and the choice of the range of the
possible magnitudes. For instance, a typical attenuation law is represented in the form
of X = ε ⋅ g ( a, b, c,...) , where X denotes the hazard index, e.g. peak ground motion,
and a, b, c,... represent the relevant parameters in the attenuation law, e.g. magnitude
and distance from the epicenter, and ε represents the residual term. Different
attenuation laws are proposed by different experts. These differences are often ascribed
to expert judgments, for each of which a probability is assigned in order to incorporate
the different expert judgments into one unified seismic hazard curve. Such
incorporations are consistent with Equation (2.1), since the inner expectation in
Equation (2.1) corresponds to each hazard curve conditional on each expert judgment
and the outer expectations correspond to the uncertainties associated with the expert
judgments. An example of the inconsistent consideration of the epistemic uncertainties
corresponds to the residual term of the attenuation law. The random variable ε can be
Here it is assumed that the occurrence of an earthquake follows a Poisson process with
intensity ν. However, in some practices the probability is calculated as:

P[X > x] = 1 − exp[−ν ∫ q(x|θ) p(θ) dθ]    (2.14)
where the conditional probability of the seismic hazard intensity given the occurrence
of an earthquake is first marginalized by integrating over the epistemic uncertainty θ ,
thereafter the assumption of the Poisson process is applied to calculate the probability
of exceedance x ; Equation (2.14) is inconsistent with the general principle given by
Equation (2.1). Generally, Equation (2.14) does not provide the same value as Equation
(2.13), although if ν is small enough both equations can be approximated as
ν ∫ q ( x | θ ) p (θ )dθ . In this sense, the evaluation of the probability with Equation (2.14)
can be seen as a numerical approximation and this may justify the use of Equation
(2.14) in practice. Furthermore, by applying Jensen’s inequality, it can be shown:

∫ (1 − exp[−ν q(x|θ)]) p(θ) dθ ≤ 1 − exp[−ν ∫ q(x|θ) p(θ) dθ]    (2.15)

i.e., Equation (2.14) provides a value larger than or equal to that of Equation (2.13).
A similar discussion may apply to cases where non-Poisson processes are assumed for
the occurrence of earthquakes and to cases where two or more seismic zones are
considered.
2.4. Discussion
Three examples considering the n-year maximum distribution, the return period and
the exceedance probability respectively have been considered. For each of these
examples typical inconsistent treatments of epistemic uncertainties found to occur in
practical applications have been considered and analyzed. The results from these
examples should be interpreted in relation to two contexts: structural design in
practice and optimal decision making. In the context of structural design in practice the
results of the examples may be understood such that the inconsistent probabilistic
assessments often made in practice are conservative and hence can be justified.
Furthermore, the inconsistent probabilistic assessments are in general less complicated
compared with the consistent assessments, since they allow for incorporation of the
epistemic uncertainties at earlier stages of the assessments. However, in the context of
optimal risk-based decision making the inconsistent probabilistic assessment should be
circumvented as it leads to sub-optimal decisions.
The first example reveals that the information provided in typical hazard maps and
load recommendations is not sufficient for direct use in the context of optimal
decision making, since it does not differentiate between the sources of uncertainty; hence the
distributions of maximum values for a given reference period cannot be correctly
established. The second example shows that the return period that provides the basis
for structural design as well as for validation of the established probabilistic models
based on observations does not correspond to the expected value of the arrival time.
Therefore, the return period assessed by Equation (2.11) should not be used for these
purposes. The third example justifies the seismic hazard analyses presently made in
practice in a numerical sense, although it is important to realize that the analyses are
not conceptually consistent with the general principle for the probabilistic assessments.
representation theory, e.g. Jensen (2001). It is worthwhile mentioning that the random
variables Xi can be seen as the components of a temporally distributed portfolio,
in analogy with a spatially distributed portfolio – the graphical representation in
Figure 2.5 can also be understood to represent a spatially distributed portfolio, the
components of which are subject to epistemic uncertainty, see Faber et al. (2007a).
Then, it is obvious that the probabilistic characteristics of identical components Xi
are subject to an epistemic uncertainty Θ that simultaneously affects all the components.
In this regard the distinction between aleatory and epistemic uncertainty might be
useful simply to make clear which variables affect other variables. For completeness
the incorporation of epistemic uncertainty in seismic hazard analysis as discussed in
the third example is shown in detail in the Appendix.
2.5. Conclusion
The present paper first provides general principles on how aleatory and epistemic
uncertainties should be considered in the probabilistic modeling and assessments for
risk based decision making. Focusing on the probabilistic modeling of extreme events,
several inconsistencies often made in practical probabilistic assessments for extreme
events are pointed out, namely concerning the n-year maximum distribution, the return period and the
exceedance probability in hazard analysis. For the considered examples it is shown that
such inconsistent probabilistic assessments overestimate the probabilistic
characteristics of the extreme events. From the perspective of structural design this can
be seen as a conservative assessment and thus may be justified. However, from the
perspective of optimal decision making the inconsistent assessments lead to
sub-optimal decisions and should thus be avoided.
2.6. Appendix
The exceedance probability is calculated assuming that the occurrence of earthquakes
over time follows a Poisson process as:
P[X > x] = Σ_{k=1}^∞ P[N = k ∩ max_{i=1,2,…,k} Xi > x]
         = Σ_{k=1}^∞ P[max_{i=1,2,…,k} Xi > x | N = k] P[N = k]    (2.16)
P[X > x] = Σ_{k=1}^∞ [1 − (1 − q(x))^k] (ν^k / k!) e^{−ν}
         = 1 − exp[−ν q(x)]    (2.17)
where q(x) is the probability that the intensity exceeds x given the occurrence of
an earthquake and ν is the occurrence rate. This is the same form as Equation (2.14),
using that q(x) = ∫ q(x|θ) p(θ) dθ. However, when epistemic uncertainties which
affect all X i are present, the calculation should proceed as:
P[X > x] = ∫ Σ_{k=1}^∞ P[max_{i=1,2,…,k} Xi > x | N = k, θ] P[N = k] p(θ) dθ
         = ∫ Σ_{k=1}^∞ (1 − (1 − q(x|θ))^k) (ν^k e^{−ν} / k!) p(θ) dθ
         = ∫ (1 − exp[−ν q(x|θ)]) p(θ) dθ    (2.18)
which is equivalent to Equation (2.13). In this way, the fact that the epistemic
uncertainty affects the ground motion intensities for all earthquakes over time plays a
crucial role.
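The difference between Equations (2.17) and (2.18) can also be illustrated numerically. The conditional exceedance probability q(x|θ), the occurrence rate ν and the epistemic model below are purely hypothetical placeholders, since the probabilistic models of the hazard analysis are not specified here.

```python
import numpy as np
from scipy import integrate, stats

NU = 0.1                               # assumed annual earthquake occurrence rate (illustrative)
THETA = stats.norm(0.0, 0.3)           # assumed epistemic bias of the attenuation law (log scale)

def q_cond(x, theta):
    """Assumed P[intensity > x | one earthquake, theta]; lognormal-type attenuation residual."""
    return 1.0 - stats.norm.cdf((np.log(x) - (5.0 + theta)) / 0.6)

def p_exceed_consistent(x):
    """Eq. (2.18): the epistemic uncertainty is kept outside the Poisson thinning."""
    f = lambda t: (1.0 - np.exp(-NU * q_cond(x, t))) * THETA.pdf(t)
    return integrate.quad(f, -2.0, 2.0)[0]

def p_exceed_inconsistent(x):
    """Eq. (2.17) with q(x) marginalized first, cf. Eq. (2.14)."""
    q_bar = integrate.quad(lambda t: q_cond(x, t) * THETA.pdf(t), -2.0, 2.0)[0]
    return 1.0 - np.exp(-NU * q_bar)

for x in (100.0, 200.0, 400.0):        # e.g. peak ground acceleration levels in cm/s^2
    print(x, p_exceed_consistent(x), p_exceed_inconsistent(x))
```

For the small occurrence rate assumed here the two values nearly coincide, illustrating the approximation ν ∫ q(x|θ) p(θ) dθ mentioned above; for larger ν the inconsistent value exceeds the consistent one.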
Kazuyoshi Nishijima
Marc A. Maes
Jean Goyet
Bureau Veritas, Marine Division, Research Department, 17 bis Place des Reflets, La
Defense 2, 92400 Courbevoie, France.
Constrained optimization of component reliabilities in complex systems (Paper II)
Abstract
The present paper proposes an approach for identifying target reliabilities for
components of complex engineered systems with given acceptance criteria for system
performance. The target reliabilities for components must be consistent in the sense
that the system performance resulting from the choice of the components’ reliabilities
satisfies the given acceptance criteria, and should be optimal in the sense that the
expected utility associated with the system is maximized. To this end, the present paper
first describes how complex engineered systems may be modelled hierarchically by
use of Bayesian probabilistic networks and influence diagrams. They serve as
functions relating the reliabilities of the individual components of the system to the
overall system performance. Thereafter, a constrained optimization problem is
formulated for the optimization of the component reliabilities. In this optimization
problem the acceptance criteria for the system performance define the constraints, and
the expected utility from the system is considered as the objective function. Two
examples are shown: (1) optimization of design of bridges in a transportation network
subjected to an earthquake, and (2) optimization of target reliabilities of welded joints
in a ship hull structure subjected to fatigue deterioration in the context of maintenance
planning.
Keywords
Constrained optimization, complex system, acceptance criteria, Bayesian probabilistic
network, influence diagram.
3.1. Introduction
Typically engineered systems are complex systems comprised of geographically
distributed and/or functionally interrelated components, which through their
connections with other components provide the desired functionality of the system
expressed in terms of one or more attributes. This perspective may indeed be useful for
interpreting and modelling a broad range of engineered systems ranging from
construction processes over water and electricity distribution systems to structural
systems. One of the characteristics of engineered systems is that, while the individual
components may be standardized in regard to quality and reliability, the systems
themselves often cannot be standardized due to their uniqueness. The performance of
the systems will depend on the way their components are interconnected to provide the
functionalities of the systems as well as on the choice of reliabilities of their
components. Thus, the design and maintenance of such systems effectively concern the
requirements to the reliability of their components, which can be translated from given
requirements to the attributes of the performance of systems in accordance with the
way the components are connected.
Due to the complex nature of the problem, modelling and optimization of such systems
generally require that different levels of analyses provided by different experts and
supported by data are integrated in an interdisciplinary manner. Taking basis in engineered
structures, at component level physical failure mechanisms may be analyzed, such as
yielding, fracture and corrosion. The component failure modes now constitute the
building blocks for the development of system failure modes, including the formation
of failure modes for sequences of sub-systems, for which the corresponding
consequences may be assessed. An optimization of the target reliability for components
of a given system, i.e., a system with a given interrelation between its components,
must take basis in such analyses. Seen in this light, it is useful to hierarchically
establish models for complex engineered systems which accommodate for the
integration of the different levels of analyses. Such a hierarchical approach may also
prove to be beneficial as a means of communication between professionals representing
the expertise required for the modelling of the performance of the different types of
components, sub-systems and systems.
The present paper addresses the problem outlined in the foregoing in the context of a
hierarchical system modelling developed for risk assessment of engineered systems by
the Joint Committee on Structural Safety (Faber et al. (2007b)), where, taking basis in
structural systems, a framework is formalized in regard to how the hierarchical system
model can be established and then applied to optimize the reliability for components of
structures based on specified requirements to the acceptable risks for the considered
structural system.
The present paper first provides a short summary of available techniques on the
modelling of complex systems. Following this, a general approach for the optimization
of the reliability of system components with given criteria to the acceptable system risk
is proposed. The proposed approach is composed of three steps; (1) adaptation of
Bayesian probabilistic network and influence diagram representation for hierarchical
system modelling, (2) linking of acceptance criteria for system level to component
level through the Bayesian probabilistic networks and the influence diagrams, and (3)
optimizing the target reliabilities of individual components. The original contribution
of the presented approach is the effective use of the commonly available techniques,
i.e. Bayesian probabilistic networks, influence diagrams and generic algorithms for
constrained optimization problems. The approach suggested allows for the assessment
of optimal target reliabilities for the individual components of systems for which the
risk acceptance criteria are specified in regard to the system performance. The
proposed approach is most useful in cases where (1) the components that constitute the
system or the sub-system can be categorized into groups with identical probabilistic
characteristics and/or (2) the components are hierarchically related. Finally, two
illustrative examples are provided. The first example addresses the design of bridges in
a transportation network subject to earthquake hazards. Through this example the
individual steps of the proposed approach are explained. The second example
considers a floating production storage and offloading unit (FPSO), which constitutes a
typical complex engineered system. In this example, the target reliabilities of welded
joints subject to fatigue deterioration in the framework of inspection and maintenance
planning are optimized with given acceptance criteria for the performance of the ship
hull structure as a whole.
Another approach for the probabilistic modelling and analysis of complex systems is
proposed by Der Kiureghian and Song (2008). In this approach, the probability of an
event of interest (related to the system performance) is formulated as a sum of the
probabilities of the mutually exclusive combinations of the component states that
govern this event. Upper and lower probability bounds on the system performance are
calculated based on an out-crossing formulation and using linear programming
techniques. Moreover, it is shown in Der Kiureghian and Song (2008) that by
aggregating several components as "super-components" and applying the linear
programming method in a hierarchical way, the approach provides reasonable
probability bounds on the system performance with a manageable computational effort.
However, the applied scheme for component aggregation affects the efficiency of the
computation and the width of the obtained probability bounds. An optimization of the
aggregation scheme in principle requires trial and error, although general guidelines
are provided in Der Kiureghian and Song (2008).
levels in whole systems are first established based on scientific knowledge without
specifying the probabilistic characteristics of the variables or assuming weak prior
distributions. The parameters of the variables are then estimated or updated using
observed data. Other applications of Bayesian hierarchical models can be found in the
area of pattern categorization/recognition, see e.g. Li and Pietro (2005) and George and
Hawkins (2005). Due to the characteristics of the applications of the models for the
pattern categorizations or recognitions, it is important that these models allow for
promptly updating the parameters in the models for a broader range of objects. To this
end, flexible representations and systematic learning algorithms which the BPN
approach provides are extensively utilized. The Bayesian hierarchical approach has
been applied also for engineered complex systems. Among others, Johnson et al.
(2002) apply the hierarchical model for estimating the reliability of missile systems,
where the fault tree analysis is extended using the Bayesian approach to accommodate
the integration of available expert knowledge and data.
In contrast to these uses of Bayesian hierarchical models, the present
paper appreciates the fact that input-output relations of phenomena in engineering at
different levels are often quantitatively available in probabilistic terms. For instance,
given the geometry and material properties of an engineered component, it is possible
to calculate the probability of failure of the component using data and by physical
modeling and analysis techniques, e.g. finite element methods. Fatigue deterioration
can be probabilistically modelled for given environments, using physical models and
data, see Straub (2004). As the events of interest such as component failure and fatigue
degradation are subject to given circumstances, which themselves might be associated
with uncertainty, the probabilities of the events are appropriately represented in terms
of conditional probabilities. Therefore, in the context of modelling of complex
engineered systems, the main focus is how the system can be hierarchically modelled
using these conditional probabilities of components at different levels.
As observed in the above the applications of Bayesian hierarchical models are rather
diverse. However, all Bayesian hierarchical models utilize generic algorithms
developed for estimating parameters and/or obtaining conditional or posterior
distributions. The algorithms themselves are indifferent to the contexts where the
Bayesian hierarchical models are employed.
operation of the system, or more generally by maximizing the service life expected
utility.
Let A and E = ( E1 , E2 ,..., En ) denote the sets of possible actions and possible states
of a system respectively. The combination of a ∈ A and e ∈ E specifies the joint
probability conditional on the action P[e | a ] and the consequences
C(a, e) = (C1 (a, e), C2 (a, e),..., Cm (a, e)) . In general these quantities are the functions
describing how the components and the sub-systems in the system are interconnected.
However, in the following it is assumed that the interconnectivity is fixed. Note that
the consequences C( a, e) may be a vector when two or more attributes of the system
performance are considered, e.g. financial cost, fatalities and damages to the qualities
of the environment. It is assumed that the consequences C( a, e) can be represented as
an attribute-wise sum of the consequences C A (a) associated with action a and the
consequences CE(e) associated with event e, namely

C(a, e) = CA(a) + CE(e)    (3.1)

A Bayesian probabilistic network (BPN) is a graphical model consisting of
so-called chance nodes and edges that logically link the nodes, and conditional
probability assignments, see Figure 3.2 for example, and see e.g. Jensen (2001) for
general introduction. An influence diagram (ID) is an extension of a Bayesian
probabilistic network that includes so-called decision nodes and utility nodes in a
graph in addition to chance nodes. Using the chain rule for Bayesian probabilistic
networks (Jensen (2001)), the joint probability P ( E | a ) can be decomposed as
P(E|a) = ∏_i P(Ei | pa(Ei), a)    (3.2)
where pa( Ei ) is the parent set of Ei . From Equation (3.2) it can be seen that the
joint probability P ( E | a ) can be built up by conditional probabilities. Any marginal
probabilities of the states of the subset of E can be efficiently calculated from the
joint probability P ( E | a ) with generic algorithms and software tools commonly
available, see the appendix of Korb and Nicholson (2004). For the BPN shown in
Figure 3.2, the parents of E3 are the nodes E1 and E2 , and the node E2 is a
function of action A . The joint probability is then written as
P(E|a) = P(E3 | E1, E2) P(E1) P(E2 | a)    (3.3)
Each term in Equation (3.3), and thus the joint probability, is fully characterized by the
conditional probability tables shown in Figure 3.2.
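As an illustration of Equations (3.2) and (3.3), the following sketch evaluates the joint probability and a marginal for the three-node BPN of Figure 3.2 with binary states; the conditional probability tables below are hypothetical, since the values in Figure 3.2 are not reproduced here.

```python
import numpy as np

# Hypothetical conditional probability tables for the BPN of Figure 3.2 (binary states 0/1).
P_E1 = np.array([0.9, 0.1])                          # P(E1)
P_E2_a = {"a0": np.array([0.8, 0.2]),                # P(E2 | a), one table per action
          "a1": np.array([0.95, 0.05])}
P_E3_E1E2 = np.zeros((2, 2, 2))                      # P(E3 | E1, E2)
P_E3_E1E2[:, :, 0] = [[0.99, 0.7], [0.6, 0.1]]       # P(E3 = 0 | E1, E2)
P_E3_E1E2[:, :, 1] = 1.0 - P_E3_E1E2[:, :, 0]

def joint(action):
    """Eq. (3.3): P(E|a) = P(E3|E1,E2) P(E1) P(E2|a), returned as a 2x2x2 array."""
    p = np.einsum("i,j,ijk->ijk", P_E1, P_E2_a[action], P_E3_E1E2)
    assert np.isclose(p.sum(), 1.0)
    return p

p = joint("a0")
print("P(E3=1 | a0) =", p[:, :, 1].sum())            # marginal obtained from the joint
```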
Let F(C, P) = ( F1 (C, P), F2 (C, P),..., Fl (C, P)) denote a vector function of C(a, e)
and P(E|a). For instance, the expected total cost may be one of the attributes of the
system performance to be considered, and is written as one element of F(C, P) as

Fi(C, P) = Σ_{e∈E} Ci(a, e) P(e|a)    (3.4)

where Ci(·,·) represents the cost.
quality exceeds a given threshold cacc may be another element of F (C, P ) and is
written as

Fj(C, P) = Σ_{e∈E} I[Cj(a, e) > cacc] P(e|a)    (3.5)

where Cj(·,·) represents the environmental damage and I[·] is the indicator
function, which returns unity if the condition in the bracket is satisfied and zero
otherwise. Such environmental damages may be represented e.g. in terms of release
volumes, the geographical release extent and/or temporal release extent of agents. The
conditional expected value of the number of fatalities given the state Em = em may be
another element of F(C, P) and is written as

Fk(C, P) = [ Σ_{e′∈E\Em} Ck(a, (em, e′)) P((em, e′)|a) ] / [ Σ_{e′∈E\Em} P((em, e′)|a) ]    (3.6)
Since the functions Fi ( i = 1, 2,..., m ) are readily calculated, the problem is reduced to
a standard non-linear constrained optimization problem for which efficient algorithms
are available, see e.g. Press et al. (1988).
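The optimization problem itself (whose formal statement is not reproduced here) can be solved with standard software. The following sketch illustrates the pattern with scipy.optimize, using hypothetical cost and system-risk functions as stand-ins for the quantities that would be evaluated through the BPN/ID model.

```python
import numpy as np
from scipy.optimize import minimize

# Decision variables: component failure probabilities p (illustrative stand-ins only).
def expected_total_cost(p):
    design_cost = np.sum(1.0 / np.sqrt(p))     # higher target reliability -> higher cost (assumed)
    failure_cost = 50.0 * np.prod(p)           # system fails only if all components fail (assumed)
    return design_cost + failure_cost

def system_failure_probability(p):
    return np.prod(p)                          # parallel-system assumption, for illustration

p0 = np.full(3, 1e-2)
constraints = [{"type": "ineq",                # acceptance criterion: P(system failure) <= 1e-5
                "fun": lambda p: 1e-5 - system_failure_probability(p)}]
res = minimize(expected_total_cost, p0, bounds=[(1e-6, 1e-1)] * 3,
               constraints=constraints, method="SLSQP")
print(res.x, system_failure_probability(res.x))
```

In an actual application, the objective and constraint functions would call the ID evaluation for the candidate component reliabilities, as described for the Excel/Hugin implementation in Example 2 below.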
3.4. Example 1
This example considers the simple optimization of the design of bridges subject to
earthquake hazards. The aim of this example is to explain in detail how the proposed
approach may be applied in practical situations. The bridges b1 , b2 and b3
geographically connect the location a with c , and thus constitute the system
components in a transportation network system, see Figure 3.3. It is assumed that the
state of the system is fully described through the combinations of the states of the three
bridges, and hence, the failures of e.g. the road sections besides the bridges in the
network are not considered. The system failure is assumed to be defined as the joint
failures of all three bridges. The objective function to be minimized is the expected
discounted total cost, which consists of the initial cost and the expected cost associated
with the failures of bridges. The acceptance criteria are assumed to be given for (1) the
expected number of fatalities in the system given that an earthquake occurs as 10, and
(2) the conditional probability that the system fails given that an earthquake occurs as
1%. The life time considered in the design of the bridges is 100 years, and it is
assumed that an earthquake occurs at most once in the system's life time. The
discounting rate applied for evaluating the future costs is assumed equal to 3% per
annum.
The node "Time" specifies the probability of the yearly discretized time T when the
scenario eq1 occurs. T is assumed to follow a geometric distribution with an
occurrence probability for each year given as νΔt = 0.01 . The nodes "V1", "V2" and
"V3" represent the logarithms of the peak ground accelerations ( cm / s 2 ) at the
locations where the bridges b1 , b2 and b3 are to be built, and are assumed to follow
normal distributions given the scenario eq1 with the parameters shown in Table 3.1.
Figure 3.4. Classes of BPNs for Earthquake hazard (left) and for Bridge (right).
When the probabilistic characteristics are implemented into the conditional probability
tables in BPNs they have to be discretized. The intervals and the upper and lower
bounds must be chosen carefully to assure the efficiency and accuracy of the
discretization. They are chosen in this example as shown in Table 3.1. Note that the
BPN in Figure 3.4 (left) assumes that "V1", "V2" and "V3" are conditionally
independent given the scenario. The nodes "Time", "V1", "V2" and "V3" (surrounded
by the bold line) are output nodes, and are connected to other nodes in the BPN for the
transportation network system, Figure 3.5.
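A possible way to carry out this discretization is sketched below; the interval bounds and the distribution parameters are hypothetical, since Table 3.1 is not reproduced here.

```python
import numpy as np
from scipy import stats

def discretize(dist, edges):
    """Probability mass assigned to each interval [edges[i], edges[i+1])."""
    cdf = dist.cdf(edges)
    probs = np.diff(cdf)
    probs[0] += cdf[0]                  # lump the lower tail into the first interval
    probs[-1] += 1.0 - cdf[-1]          # lump the upper tail into the last interval
    return probs

# Hypothetical bounds/intervals for node "V1" (log of PGA in cm/s^2 given scenario eq1).
edges = np.linspace(3.0, 7.0, 9)        # 8 intervals; the Table 3.1 values are not reproduced here
v1 = stats.norm(5.0, 0.6)               # assumed conditional distribution of V1 given eq1
print(discretize(v1, edges).round(4))   # one row of the conditional probability table
```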
The bridges are modelled in the Bridge class BPN as shown in Figure 3.4 (right). The
bridges b1 , b2 and b3 are assumed to be identically modelled through the Bridge
class BPN. However, the different probabilities in the input nodes "V", "X" and
"Theta2" (highlighted with bold dashed line) facilitate the differentiation between the
resistances of the bridges and the corresponding probabilities of failure. In the Bridge
class BPN, S denotes the load effect, which is represented by
S =V + A (3.10)
where A represents the logarithm of the soil amplification factor. A is assumed to
follow a normal distribution with the parameters given in Table 3.1. The resistance R
of the bridge is modelled as
R = X + Θ = X + (Θ1 + Θ2 ) (3.11)
where X specifies the design of the bridges and Θ represents the uncertainties
associated with the resistance of the bridge. Θ can be decomposed into two types of
uncertainties, Θ1 and Θ2 . Θ1 is the uncertainty associated with individual
realizations of bridges, and can be assumed independent between the different bridges,
whereas Θ2 denotes the common uncertainty that affects all realizations of bridges and
thus introduces statistical dependence. For example, uncertainty on material
geometry or uncertainties associated with construction work may belong to the former
type of uncertainty. Modelling and statistical uncertainties belong to the latter type of uncertainty.
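A Monte Carlo sketch of the conditional failure probability of a single bridge, following Equations (3.10) and (3.11), is given below; all distribution parameters are hypothetical, since Table 3.1 is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical parameters (the values of Table 3.1 are not reproduced here); all on log scale.
V      = rng.normal(5.0, 0.6, n)        # log PGA at the bridge site given scenario eq1
A      = rng.normal(0.2, 0.1, n)        # log soil amplification factor
theta1 = rng.normal(0.0, 0.2, n)        # bridge-specific resistance uncertainty
theta2 = rng.normal(0.0, 0.2, n)        # common (model/statistical) uncertainty shared by all bridges
x_design = 6.0                          # design parameter X (decision variable)

S = V + A                               # load effect, Eq. (3.10)
R = x_design + theta1 + theta2          # resistance, Eq. (3.11)
print("P(failure | earthquake) approx.", np.mean(R < S))

# For the joint failure of several bridges, theta2 (and the earthquake scenario) would be
# sampled once per scenario and shared, while theta1 is sampled independently per bridge.
```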
                      State of Bridge
Bridge 1:                 NF    NF    NF    NF    F     F     F     F
Bridge 2:                 NF    NF    F     F     NF    NF    F     F
Bridge 3:                 NF    F     NF    F     NF    F     NF    F
Failure cost
(Monetary unit):          0     10    10    50    10    50    50    200
Fatality:                 0     10    10    20    10    20    20    30
(Failure costs are not discounted. F and NF are abbreviations for failure and no failure, respectively.)
3.4.2. Results
The expected discounted total costs, the expected number of fatalities and the
probabilities of system failure given that an earthquake occurs for the 27 possible
actions are calculated using the established IDs. The result is shown in Figure 3.7. At
the bottom of the figure the correspondence between the actions and the combinations
of the design alternatives for the three bridges is also shown. The optimal action
consistent with the two acceptance criteria regarding the expected number of fatalities
and conditional probability of system failure given the occurrence of an earthquake is
identified as action 25 (design alternative a3 for the bridges b1 and b2 , and design
alternative a1 for the bridge b3 ); action 17 results in the minimum expected
discounted total cost, but it does not satisfy the acceptance criteria. The strategy behind
action 25 may be interpreted as follows: considering the non-linear relation between
the number of failed bridges and the failure costs, a sound strategy may be to avoid, by
all means, the simultaneous failures of the three bridges in an economically efficient
way, which may be realized with higher reliabilities for one or two of the three bridges
and comparatively low reliability for the other bridge(s). Since the earthquake hazard
is smallest for bridge b1, the reliability of the system can be increased most
efficiently through bridge b1, and relatively efficiently through bridge b2, by adopting
the design alternative a3 (corresponding to the highest design resistance among the
three design alternatives) for the bridges b1 and b2. At the same time, by
accepting a relatively higher failure probability for bridge b3 , the expected discounted
total cost can be reduced. This becomes clearer by comparing action 25 with action 9,
which is composed of the same set of design alternatives but applied for different
bridges, i.e., a1 for the bridge b1 and a3 for the bridges b2 and b3 . Action 9
requires the same initial cost as action 25 and results in almost the same
expected discounted total cost, but in a significantly higher conditional probability of system
failure given an earthquake. This strategy may seem counterintuitive, and may not be considered in
practical situations where typically the resistances of structures may be designed in a
proportional way to the magnitudes of hazards. However, from a system optimization
point of view, this is the best strategy that satisfies the acceptance criteria given for the
system. It should be noted that in practical situations decision makers might accept
slightly higher costs to further reduce the risk of fatalities (e.g. Action 27 instead of
Action 25 in this example). However, if the objective function and the constraints are
established to fully represent the decision maker's preference, such a subjective
decision may lead to sub-optimal decisions.
Figure 3.7. Expected discounted total cost, and expected fatality and probability of
system failure given that an earthquake occurs.
3.4.3. Discussion
The hierarchical Bayesian approach provides a clear perspective of how the whole
system should be built up using the modules representative of different levels of
analyses. In this example, the transportation network system can be built up with four
modules, i.e., earthquake module represented by the earthquake class BPN, a bridge
module represented by the bridge class BPN, a design module and a consequence
module, see Figure 3.5. These modules can be built up separately, whereas the
interfaces between the modules must be specified. Such a module-oriented modelling
in the hierarchical Bayesian approach not only enhances the integration of the
knowledge of different experts, experience and data available at different levels, but
also increases the productivity of risk assessments, since the modules are re-usable.
While only a small number of discrete action alternatives are considered in this
example, there are other cases where a large number of discrete action alternatives or
continuous action alternatives are to be considered. In such cases it is not feasible to
perform the ID analysis for every action; thus, the adaptation of efficient algorithms for
solving optimization problems under constraints is needed. In this context, IDs serve
as the function that calculates the value of the expected utility and the values of the
quantities for which acceptance criteria are defined, which can then in turn be
implemented into optimization algorithms. In the next example, it is shown how this
may be realized using commonly available software tools.
3.5. Example 2
Optimal reliability for components in Floating Production Storage and Offloading
Units (FPSOs) subject to fatigue deteriorations is considered in this example. The main
function of FPSOs is to produce and store oil at offshore oil fields with given
requirements to reliability in production and safety to persons and environment.
Typically considered events of system failure for FPSOs are:
The hull components as described above have basically two functions, namely, to
ensure that the overall ship has sufficient structural integrity and to provide the means
for containing cargo and ballast. Failure of the components of the hull at this level can
be assumed as the events of:
Considering now the individual components as outlined above, these may be
viewed as assemblies of plates connected by welded joints. Failure of these
components may lead to:
Thus, the losses or damages at component level may lead to the hull failure or
undesired economic and environmental losses as well as loss of lives given the way the
components are interconnected. The problem in this example is to optimize the target
reliabilities for the welded joints in plate and tank partition components given the
requirements to the functionality/consequence of the ship hull, e.g. the probability of
hull failure. It is emphasized in this example how commonly available software tools
can be used in accordance with the proposed approach. For this purpose a software tool
is developed using Hugin® for the BPN/ID representation and Microsoft Excel®
(hereafter Excel) for the optimization algorithm as well as the user interface. In the
subsequent section, an overview of the software tool development is given.
In Figure 3.10, the illustration of the hierarchical Bayesian representation of the ship
hull structure is given. Two BPNs in the top of the figure represent the performances of
tanks. The tank performances are characterized by the states of the plates that
constitute the tanks. As is described above, at this level the possible consequences due
to component failures are capacity reduction, explosion and environmental damage due
to leaks. The ID in the bottom of the figure concerns how the component failures may
propagate and lead to further consequences at system level. Here, three attributes of the
consequences are identified, i.e. economic loss, loss of lives and environmental
damage measured in terms of leak intensities. These BPNs and ID are interconnected
as shown in the figure. In the entire ID the conditional probability tables are assumed
established with the help of experts, see e.g. Figure 3.11 (which is the conditional
probability table for node "Explosion_1" as implemented into a Hugin file), whereas
the nodes that represent the components serve as root nodes whose probabilities are
represented in terms of unconditional probabilities, which are derived from the target
reliabilities for welded joints in each component. Therefore, by changing the target
reliabilities for the welded joints which are set in the Excel file, the unconditional
probabilities for the components are changed accordingly. In turn, the corresponding
probabilistic characteristics, e.g. the expected total cost or the probability of ship hull
failure, are changed and stored in the Excel file, see Figure 3.12. This process is
automated through ActiveX. The design and service life maintenance cost for the
different welded joints is in general a function of the target reliability in regard to
fatigue failure, and this is implemented as a VBA code in the Excel file. For the
assessment of the relationship between the reliability of the welded joints subjected to
fatigue failure and the service life cost, the iPlan software described in Straub and
Faber (2006) may be utilized. Finally, the optimal target reliabilities for welded joints
are obtained using the Solver add-in provided in Excel – target reliabilities
correspond to "changing cells", and acceptance criteria for the ship hull correspond to
"constraints" in the Solver add-in.
3.6. Conclusions
The present paper proposes a framework for the modeling and the optimization of
reliabilities for components in complex engineered systems subject to requirements
specified in terms of system performance. It is shown how the identification of the
target component reliabilities that are optimal and consistent with given acceptance
criteria for system performance can be treated as an optimization problem with
constraints. Appreciating the perspective that engineered systems are built up by
Inter-generational distribution of the life-cycle cost of an engineering facility (Paper III)
Kazuyoshi Nishijima
Daniel Straub
Abstract
In decision making for civil engineering facilities, as well as other societal activities,
the criteria for sustainability are inter-generational equity and optimality. Two
challenging questions must be addressed in this context: how to compare the benefits
and costs among different generations, and how to compensate and adjust for the
inhomogeneously distributed benefits and costs between the generations. To address
and answer these questions for engineering facilities, first of all the temporal
distribution of the life-cycle benefits must be assessed. To ensure optimality, the total
life-cycle benefits for the facility must be maximized. In the present paper initially the
normative criteria for sustainability are presented. Thereafter it is demonstrated how
the criteria may be implemented for the purpose of optimization of structural design.
The inter-generational distribution of benefits and the implications for sustainable
decision-making are then illustrated by an example considering the optimal design of
the concrete cover thickness of a RC structure subject to chloride-induced corrosion of
the reinforcement.
Keywords
Sustainability, discounting, life-cycle cost, chloride-induced corrosion,
cover thickness.
4.1. Introduction
A significant amount of research has been devoted to life-cycle analysis for civil
engineering facilities. In recognition of the significant uncertainties associated with the
performance of structures over their service life, decision-theoretical approaches have
been applied for the optimization of structural design, e.g., Rosenblueth and Mendoza
(1971) and Rackwitz (2000). The developed methodological framework facilitates the
optimization of the design of structures such that a balance is achieved between the
benefits achieved through the facility and the costs associated with design and
construction, future costs of inspection and maintenance as well as costs associated
with possible repairs, replacements and failures. Recently, life-cycle analysis has been
utilized to enhance a sustainable development of the built environment, e.g. Rackwitz
et al. (2005), Faber and Rackwitz (2004), Nishijima et al. (2004) and Nishijima et al.
(2005). In this context, focus is shifted from the facilities to a sequence of decision
makers and stake holders, each of which represents a subsequent generation that
benefits from the facility while paying the costs of maintenance, repair, replacement
and other adverse consequences. Although life-cycle analysis is well advanced in the
civil engineering field and has been applied within the context of sustainability, less
attention has been paid to the distribution of costs over time. This distribution is
essential, since it allows for assessing the burden of each generation, and thus indicates
the necessity for an inter-generational compensation when the aggregation of benefits
and costs is not uniformly distributed over time.
The present paper initially formulates the criteria for sustainability and thereafter sets
up a multi-decision-maker framework for inter-generational sustainable decision
making. As will be discussed, this framework may also provide a useful basis in any
intra-generational context for organizations involved in decision making concerning
activities with life times significantly exceeding the budgeting periods or the life time
of the individuals responsible for the decision making within the organization. The
optimization of structural design using the suggested framework is illustrated by an
example considering the optimal design of the cover thickness for a RC structure
subject to chloride-induced corrosion. Finally, the temporal distribution of the
life-cycle costs is explicitly assessed, clearly illustrating how the benefits and costs are
unevenly distributed over the generations.
Basically any kind of activity at present has consequences for the future in terms of
benefits and costs. The benefits and costs may not necessarily be expressed in
monetary terms and there are controversial discussions on whether all societal and
environmental consequences can be measured comprehensively in monetary terms, as
discussed in Turner (1992) and Ayres et al. (1998). However, in the present paper,
benefits and costs are assumed to be represented by monetary values for the
convenience of discussion. The temporal distribution of consequences associated with
different activities differs significantly; however, it is difficult to identify activities
which do not have some effect for the future generations. In case of exploitation of
natural resources the benefit is more or less immediate – but the resources exploited
are no longer available for future generations. In case of disposal of toxic waste the
situation is much the same – the benefit is achieved by the present generation but the
potential adverse consequences are likely to be transferred to future generations.
Sustainability is an issue which always has to be kept in mind.
The schematic benefit or cost path is illustrated in Figure 4.1. A sequence of decision
makers is assumed over time, each representing one generation. Since each
generation considers the benefits and costs and makes decisions from its point of view,
an explicit modeling of the different subsequent decision makers is indispensable,
especially when the pure time preference or loss of life are considered in the utility
function.
The benefits and the costs illustrated in Figure 4.1 correspond to the gross values at
each point in time, i.e., they are not discounted. The i th generation enjoys the benefit
or carries the cost of the hatched area. Since this is the gross value, the same values at
different points in time do not necessarily have the same perceived influence on
different generations, mainly because of economic growth. Therefore, benefits and
costs should be discounted by the economic growth to ensure the equal treatment
between generations in accordance with the inter-generational equity. Taking into
account the economic growth and disregarding the effect of overlapping generations,
the total utility aggregating benefits and costs can be expressed as:
U = Σ_{i=1}^∞ δ(ti) Ui    (4.1)
where U is the total utility for all generations, δ (⋅) is the discounting factor
representing economic growth and Ui is the utility for the i-th generation which begins
at t = ti . Extension of Equation (4.1) to cover also the case of overlapping generations
may be performed as shown in Bayer and Cansier (1999), Bayer (2003) and Rackwitz
et al. (2005), however, the effect of this is of minor importance for the overall
life-cycle benefit assessment. When decision making is subject to uncertainty, the
utilities in Equation (4.1) should be interpreted as the expected utilities. The utility for
the i th generation may be written as:
Ui = ∫_{ti}^{ti+1} u(t) γ(t − ti) dt    (4.2)
where u (⋅) is the utility per unit time and γ (⋅) is the discounting factor within one
generation. The utility within one generation may be discounted by pure time
preference as well as by economic growth, thus
γ(t) = δ(t) ρ(t)    (4.3)
where ρ (⋅) is the discounting factor representing pure time preference. Note that the
discounting factor is related to the discount rate, e.g., for δ (⋅) as:
δ(t) = exp(−δ t)    (4.4)
where δ is the discount rate per unit time.
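For a constant expected utility rate, Equations (4.1) to (4.4) can be evaluated directly, as sketched below with illustrative discount rates and generation length.

```python
import numpy as np
from scipy import integrate

delta, rho = 0.02, 0.03                 # illustrative discount rates: economic growth, pure time preference
gamma = delta + rho
tau = 25.0                              # assumed generation length [years]
u = lambda t: 1.0                       # assumed constant (expected) utility per unit time

def generation_utility(t_i):
    """Eqs. (4.2)-(4.3): utility of the generation starting at t_i, discounted within the generation."""
    f = lambda t: u(t) * np.exp(-gamma * (t - t_i))
    return integrate.quad(f, t_i, t_i + tau)[0]

def total_utility(n_generations=200):
    """Eq. (4.1): sum over generations, discounted between generations by economic growth only."""
    return sum(np.exp(-delta * i * tau) * generation_utility(i * tau) for i in range(n_generations))

print(total_utility())
```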
Each decision in regard to a civil engineering facility results in one specific temporal
distribution of expected utility and thus enables the calculation of the total utility
according to Equation (4.1). To comply with the second criterion for sustainability, i.e.,
optimality, the total utility must be maximized, which in the case where the benefit
function does not depend directly on the decision corresponds to a minimization of the
total cost. However, even if the maximization is performed under consideration of
inter-generational equity in terms of proper discounting as applied in Equations (4.1) -
(4.3), it does not necessarily imply that each generation obtains the same utility from
the facility, as illustrated in Figure 4.1. It is unlikely that each single activity optimized
in the above sense results in a uniform distribution of the utility among the current and
all future generations. Therefore, the transfer of the benefits in terms of, for instance,
man-made capital or natural resources is essential to achieve inter-generational equity,
see Figure 4.2. The distribution of costs over time provides the basic information
required to achieve inter-generational equity, enabling a comparison and a
compensation between the generations through societal activities which are not
necessarily within the civil engineering field.
The equivalent sustainable discount rate γ* may be defined through

∫_0^∞ e^{−γ* t} u(t) dt = Σ_{n=1}^∞ e^{−δ tn} ∫_{tn}^{tn+1} e^{−γ(t − tn)} u(t) dt    (4.5)
where u (t ) is the (expected) utility per unit time at time t . The equivalent
sustainable discount rate may be interpreted as the one which, if applied to a decision
problem with the classical one-decision-maker perspective, yields the same total
expected utility as when the decision problem is analyzed from the
multi-decision-maker perspective. In general, it is not possible to obtain an analytical
expression for γ * . However, in the case where consequences are invariant at any time,
the durations of generations τ = tn +1 − tn ( n = 1, 2,3,... ) are constant and the
occurrences of events associated with consequences follow a stationary Poisson
process, the equivalent sustainable discount rate is given as follows:
γ* = γ (1 − e^{−δτ}) / (1 − e^{−γτ})    (4.6)
where δ is the discount rate per unit time due to economic growth, ρ is the discount
rate per unit time due to pure time preference and γ = δ + ρ, see Faber and Nishijima
(2004). The equivalent sustainable discount rates for several cases are illustrated in
Figure 4.3, where for ρ kept constant at 3% per year or 0% per year for comparison,
the equivalent sustainable discount rates are given as functions of the duration of the
generation τ for several values of δ . The equivalent sustainable discount rate γ * is
smaller than the total discount rate γ , except for the case where ρ = 0 . If the
discount rate consists only of pure time preference ( δ = 0 ), the equivalent sustainable
discount rate is zero, i.e., within the classical framework, the benefits and costs should
not be discounted at all to obtain the same utility function as with the
multi-decision-maker framework. If the discount rate by pure time preference is set
equal to zero and the discount rate by economic growth is set equal to 5%, the
equivalent sustainable discount rate is equal to 5%, regardless of the duration of the
generation. This means that if the discount rate is only due to economic growth, the
multi-decision-maker framework is identical to the classical framework.

Figure 4.3. Equivalent sustainable discount rate γ* for the case of constant utility per unit time.

In general, the discount rates which have been applied so far in the classical framework are too large,
i.e., they lead to non-optimal solutions from the viewpoint of sustainability.
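Equation (4.6) is easily evaluated numerically; the following sketch reproduces the qualitative behaviour discussed above for illustrative parameter values.

```python
import numpy as np

def gamma_star(delta, rho, tau):
    """Eq. (4.6): equivalent sustainable discount rate for generation duration tau [years]."""
    gamma = delta + rho
    return gamma * (1.0 - np.exp(-delta * tau)) / (1.0 - np.exp(-gamma * tau))

for tau in (10.0, 25.0, 50.0, 100.0):
    print(tau, gamma_star(delta=0.02, rho=0.03, tau=tau))   # rho = 3%/yr, as in one case of Figure 4.3
print(gamma_star(delta=0.05, rho=0.0, tau=25.0))            # rho = 0: gamma* equals delta = 5%
```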
4.4. Example
Optimal life-cycle cost based design of the concrete cover thickness of a RC structure
subject to chloride-induced corrosion of the reinforcement is considered. The intended
service life time is assumed to be infinite, meaning that the desired function of the
structure is unlimited in time. The applied probabilistic modeling of the degradation
over time is included in Annex A for simple reference and more details are provided in
Faber et al. (2005). The expected life-cycle costs are assumed to consist of the initial
costs CI , the expected repair costs E[CR ] and the expected failure costs E[CF ] ,
which all depend on the optimization variable dnom , i.e. the concrete cover thickness.
It is assumed that visual inspections are made every ΔtI = 5yr and that an indication
of visible corrosion automatically triggers a repair. In accordance with the
renewal-theoretical approach outlined in Faber and Rackwitz (2004), it is assumed that
in case the structure fails, it is reconstructed. Following a repair or a reconstruction, the
structure is assumed to be brought back to its original state, i.e., described using the same
probabilistic model as a new structure. The realization of the structure after repair or
reconstruction is assumed to be independent from previous structures. Furthermore,
inspections are modeled as being perfect, i.e., visible corrosion is detected with
probability 1 at an inspection. The costs of initial design, repairs and failures are
modeled as:
CR = aR CI    (4.8)
CF = aF CI    (4.9)
with parameter values in Table 4.1, where also the assumed discount rates are
summarized. The initial cost CI is assumed to consist of a fixed cost and the cost
depending on the cover thickness, and the repair cost CR and the failure cost CF are
assumed to be proportional to the initial cost.
Table 4.1. Cost and discount model.
PF(t) = qF(t) + Σ_{s=1}^{t−1} (PR(s) + PF(s)) · qF(t − s)    (4.11)

PR(1yr) = qR(1yr)    (4.12)

for t = 1yr.
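A minimal sketch of this recursion is given below; Equations (4.10) and (4.13), which are not reproduced above, are assumed to mirror Equations (4.11) and (4.12) for the repair probability and for the failure probability in the first year, respectively, and qR, qF are taken as given arrays.

```python
import numpy as np

def repair_failure_probabilities(q_R, q_F):
    """Recursion of Eqs. (4.10)-(4.13); q_R[t], q_F[t] are the probabilities of first
    repair/failure at year t for a new structure (index 0 unused, years 1..T)."""
    T = len(q_R) - 1
    P_R = np.zeros(T + 1)
    P_F = np.zeros(T + 1)
    P_R[1], P_F[1] = q_R[1], q_F[1]                      # Eqs. (4.12)-(4.13), assumed analogous
    for t in range(2, T + 1):
        renewals_R = sum((P_R[s] + P_F[s]) * q_R[t - s] for s in range(1, t))
        P_R[t] = q_R[t] + renewals_R                     # assumed form of Eq. (4.10)
        renewals_F = sum((P_R[s] + P_F[s]) * q_F[t - s] for s in range(1, t))
        P_F[t] = q_F[t] + renewals_F                     # Eq. (4.11)
    return P_R, P_F

# Illustrative first-passage probabilities (hypothetical numbers, years 1..10):
q_R = np.array([0.0, 0.00, 0.00, 0.00, 0.00, 0.05, 0.00, 0.00, 0.00, 0.00, 0.08])
q_F = np.array([0.0, 0.001, 0.001, 0.002, 0.002, 0.003, 0.003, 0.004, 0.004, 0.005, 0.005])
P_R, P_F = repair_failure_probabilities(q_R, q_F)
print(P_R[1:], P_F[1:])
```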
The recursive formulations Equations (4.10) to (4.13) are obtained as follows. The set
of possible different events leading to a repair at time t can be split into subsets:
these subsets are differentiated by the time of the last repair or reconstruction, which
can occur at times t − 1yr, t − 2yr, etc. until 0yr; the latter corresponding to the case
where no repair or reconstruction has been performed previously. The probability of
failure at time t is obtained analogously. As the decision rule just specifies qR(t)
and qF(t), this recursive formulation can be applied for any kind of decision rule, as
long as the structure is repaired at some point in time and reconstructed after failure,
resulting in identical but stochastically independent structures.
Once the probability of repair and the probability of failure at each point in time are
obtained, the calculation of the expected costs is straightforward. Repair and
reconstruction after failure can be carried out at each inspection time, the inspection
interval being 5 years. Figure 4.6 shows the distribution of costs over time for several
cover thicknesses. These costs are not discounted. The expected cost for each point in
time consists of the expected repair costs and the expected failure costs. The expected
failure costs are much smaller than the expected repair costs in the present example,
that is why the expected total costs are close to the expected repair costs. The
(non-discounted) expected costs decrease with time for all cases in Figure 4.6. This
tendency is due to the fact that the failure rate, which is the probability of failure per
unit time conditional on survival up to time t, is decreasing with time for the
considered deterioration mechanism. When the structure performs poorly (i.e., the
realizations of the random variables are unfavorable), it will be repaired or
reconstructed already after a few years. After each repair or reconstruction, the new
structures are identical but stochastically independent of the old ones. A structure with
an initially bad performance will thus eventually be replaced by one with a good
performance. The expected value of the performance of the structure is therefore
increasing with time and the expected costs of failures and repairs are decreasing. It
should be realized that this tendency depends strongly on the assumed dependency
between subsequent realizations of the structure as well as the characteristics of the
failure rate function.
Figure 4.6. Temporal distribution of expected costs (at 5 year intervals), not
discounted.
which should be minimized. CR ,t and CF ,t are the costs of repair and failure at time
t respectively. Since different discount rates are applied within the generations and
between the generations, the duration of each generation τ must be specified. Figure
4.7 shows optimal cover thicknesses for different values of the durations of the
generations. With increasing duration of generations, the optimal cover thickness
becomes smaller. This is because the “equivalent sustainable discount rate” becomes
larger as the duration of the generation becomes longer, see Equation (4.6) and Figure
4.3, and consequences in the future are, therefore, valued less. The case where the
duration of a generation is infinite corresponds to the classical life-cycle analysis
where only one decision maker is assumed. As observed in Figure 4.7, the optimal
cover thickness varies significantly with the duration of the generations, pointing to the
importance of considering the problem from the viewpoint of the
multi-decision-makers.
Figure 4.8. Discounted expected costs for each generation (with a duration 25 years).
4.5. Discussion
Figure 4.8 clearly shows the inhomogeneous distribution of costs among the
generations. In particular the first generation pays much more than all following
generations. In order to comply with the first criterion for sustainability,
inter-generational equity, the temporal differences must be compensated by other
means (e.g., by transferring the benefits on capital stocks and natural resources). Such
compensation is beyond the scope of the analysis as presented in this paper, as it
requires that all societal activities must be considered simultaneously within the
multi-decision-maker framework. In this context it should be recalled that, although in the
presented example it is the first generation which pays most, many societal activities
have large consequences in the future while only the current generation directly
benefits from them.
The analysis presented here ensures that the second criterion of sustainability,
optimality, is fulfilled in such a way that it is consistent with the first criterion. It seems
paradoxical at first that by consideration of multi-decision-makers (which is required
by the inter-generational equity criterion), the optimal design which fulfills the
Finally, it is important to note that whereas the present paper specifically addresses the
problem of sustainable decision making in an inter-generational context the developed
framework also may be valuable for the decision making in intra-generational contexts
involving several decision makers and stakeholders as well as budgets over time. This
is the situation when decision making is considered in organizations which are
responsible for the design, construction and operation of engineering facilities, such as
highway agencies. In such organizations both the budgets and the persons involved in
the decision making have a substantially shorter lifetime than the facilities they are
responsible for. The multi-decision-maker framework may serve to set guidelines or
rules for decision making in such contexts, to help avoid decisions which may yield a
short-term benefit for the preferences of individuals but which, from an overall
life-cycle perspective, induce economic losses for the organization.
Furthermore, the framework can be utilized as a rational basis for long term budgeting.
4.6. Conclusions
It is demonstrated how the inter-generational distribution of the life-cycle cost of an
engineering facility can be assessed. This is of importance for ensuring sustainability
of the facility, whereby the considered criteria for sustainability are inter-generational
equity and optimality. It is shown how decisions regarding an engineering facility must
be optimized in order to comply with these criteria and it is outlined that the results of
the optimization may be used as a basis for a broader discussion regarding
inter-generational equity taking into account all kinds of societal activities. Finally, it is
highlighted that the developed framework also may provide a useful basis in any
intra-generational context for organizations involved in decision making concerning
activities with life times significantly exceeding the budgeting periods or the life time
of the individuals responsible for the decision making within the organization.
4.7. Annex A
For easy reference, the applied probabilistic model for deterioration of concrete
structures subject to chloride-induced corrosion is presented in the following. The
modeling corresponds to DuraCrete (2000) and here follows Faber et al. (2005), where
additional details of the models are described.
Corrosion initiates at the reinforcement, when the chloride concentration has reached
the critical chloride concentration CCr . The ingress of chlorides in the concrete is
described by Fick’s second law of diffusion. Based on this model, the random variable
TI representing the time until corrosion initiation is calculated as:
T_I = \left( \frac{d^2}{4\, k_e k_t k_c D_0\, t_0^{\,n}} \left[ \operatorname{erf}^{-1}\!\left(1 - \frac{C_{cr}}{A_{C_S}\,(w/c) + \varepsilon_{C_S}}\right) \right]^{-2} \right)^{\frac{1}{1-n}}    (4.15)
The time until visible corrosion, corresponding to minor cracking and coloring of the
concrete surface, can be determined based on experience. By adding the propagation
time TP to the initiation time TI , the limit state function for visible corrosion is
written as:
g_{VC}(t) = X_I T_I + T_P - t    (4.16)
The time between visible corrosion and failure is, for illustrative purposes, represented
by the time TP . The limit state function for failure is thus:
g_F(t) = X_I T_I + T_P + T_{P2} - t    (4.17)
Note that the model does not account for the dependency between the propagation time
T_{P2} and the environmental parameters or the cover thickness.
The values of the distribution parameters for the random variables in Equations (4.15)
to (4.17) can be obtained as functions of indicators, see Faber et al. (2005). For the
considered example, they are stated in Table 4.2. These values are representative for a
concrete with ordinary Portland cement in a splash environment.
The probabilities of the events visible corrosion and failure can be obtained by e.g.
Structural Reliability Analysis (SRA) or simulation techniques.
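As a sketch of how such a simulation can be set up, the following minimal Python Monte Carlo estimate of the probabilities of visible corrosion and failure follows Equations (4.15)-(4.17) directly; all distribution parameters are illustrative placeholders and not the indicator-based values of Table 4.2.

import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(1)
N = 100_000                                   # number of Monte Carlo samples

# Random variables of the deterioration model (placeholder distributions)
d    = rng.normal(50.0, 5.0, N)               # concrete cover [mm]
ke   = rng.lognormal(0.0, 0.1, N)             # environment factor
kt   = rng.lognormal(0.0, 0.1, N)             # test method factor
kc   = rng.lognormal(0.0, 0.1, N)             # curing factor
D0   = rng.lognormal(np.log(300.0), 0.3, N)   # reference diffusion coefficient [mm^2/yr]
n_a  = rng.uniform(0.2, 0.5, N)               # age exponent n
Ccr  = rng.normal(0.5, 0.1, N)                # critical chloride content
CS   = rng.normal(1.5, 0.3, N)                # surface content A_CS*(w/c) + eps_CS
t0   = 0.0767                                 # reference time [yr] (28 days)

# Equation (4.15): time until corrosion initiation
arg = np.clip(1.0 - Ccr / CS, 1e-6, 1.0 - 1e-6)
TI  = (d**2 / (4.0 * ke * kt * kc * D0 * t0**n_a) * erfinv(arg)**(-2))**(1.0 / (1.0 - n_a))

XI  = rng.lognormal(0.0, 0.1, N)              # model uncertainty X_I
TP  = rng.lognormal(np.log(5.0), 0.3, N)      # propagation time T_P [yr]
TP2 = rng.lognormal(np.log(5.0), 0.3, N)      # additional time T_P2 until failure [yr]

for t in (25, 50, 75, 100):
    p_vc = np.mean(XI * TI + TP       <= t)   # Equation (4.16): g_VC(t) <= 0
    p_f  = np.mean(XI * TI + TP + TP2 <= t)   # Equation (4.17): g_F(t)  <= 0
    print(f"t = {t:3d} yr:  P(visible corrosion) = {p_vc:.3f},  P(failure) = {p_f:.3f}")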
A budget management approach for societal infrastructure projects (Paper IV)
Kazuyoshi Nishijima
Abstract
Life cycle costing analysis is broadly applied as a tool for decision support for civil
engineering structures, whereby the expected total cost over the life cycle of the
structure is advocated as the objective function to be minimized. The present paper
takes the new perspective of considering the problem as a budget allocation problem,
where the aim is to optimize the allocation of budgets for the purpose of maintaining
the operation of a portfolio of structures. Whereas all the consequences associated with
a project must be taken into account in the life cycle costing analysis, it is important
to distinguish the financial costs, which must be paid, from the user costs, which
represent the follow-up consequences, i.e., opportunity losses. This is because
only the costs to be paid are related to the budget. The present paper proposes an
approach to determine the optimal amount of budget and the optimal maintenance
decisions, considering these two types of cost.
Keywords
Objective function, resource allocation, life cycle optimization.
5.1. Introduction
Over the last decade life cycle costing analysis has gained a widespread interest as a
tool for decision support in civil engineering, e.g., Rosenblueth and Mendoza (1971),
as well as in many other engineering fields. It has been appreciated in research and
practice that the efficiency of engineering projects must be assessed with due
consideration of all benefits and costs induced by the projects on time scales
representative for the actual duration of the projects; only when the life-cycle benefits
are larger than the corresponding costs can an engineering project be considered
feasible, e.g. Rackwitz (2000). The feasibility of engineering projects such as societal
infrastructure must thus be assessed considering all phases throughout their life-cycle –
from the concept phase until decommissioning.
As opposed to most private business initiatives, infrastructures built for the purpose of
facilitating the development of society serve functions or are in other ways associated
with benefits and/or costs which, in time, reach well beyond the duration of
the generations who decide to build them. To ensure a sustainable societal
development, i.e., a development which aims to optimize the objectives of not only our
own generation but also those of the future generations, the assessment of life-cycle
costs must take into account the costs implied for future generations. To this end, life
cycle costing analysis together with an appropriately chosen discounting function, see
e.g., Rackwitz et al. (2005), provides a consistent rationale. The objective function to
be minimized is, in many cases, the expected total cost under the assumption that the
benefits from structures are independent of the decision variables, taking the follow-up
consequences, e.g. reduced benefits due to unavailability, into account as user costs.
Turning our focus to practical situations, however, the decision makers who are
responsible for the maintenance of portfolios of structures request budgets which are in
excess of the expected total costs. The reason for this is obviously in part that their
success as decision makers is measured in terms of whether they are able to meet their
requested budgets and at the same time are able to keep their portfolio of structures in
operation. It may well be that they request more if the lack of budget leads to serious
consequences such as user costs associated with reduced functionality of roadway
systems. Given this practical constraint, an optimal decision which minimizes the
expected total cost does not necessarily lead to an optimal budgeting from a societal
point of view, corresponding to a resource allocation of the society maximizing the
societal net benefit. Thus the optimization of the decision and the total budget by
maximizing the societal benefit becomes an issue in the context of optimal societal
resource allocation.
the objective function to be minimized is the (discounted) expected total cost including
follow-up consequences such as user costs. In this regard, it can be said that the
optimization by the minimization of expected total cost implicitly assumes a “perfectly
flexible budgeting”, namely, a situation where the budget is always available when
needed. For the purpose of assessing the optimal amount of budget required for a project
or projects, however, this may not be appropriate. A structure which has reduced
availability due to failure or the need of repair works may not be rehabilitated due to
insufficient budgets, which in turn may lead to additional user costs. The optimal
budget allocation may not correspond to the expected total cost. In order to maximize
the net benefit, the budget allocation, the financial costs and the user cost must be
considered simultaneously.
In a broader sense, the objective function should be an aggregated utility, in which all
the preferences of the decision maker are included, see e.g., Faber and Maes (2003). In
practical situations, a decision maker may be precautionary in the sense that he/she
requests a larger budget than the expected cost in order to ensure the successful
management of projects. In the following section, the net benefit is proposed as a
utility function to represent the preferences of the decision maker.
NB = \begin{cases} B - K - \Delta B(e) & (C(a,e) \le K) \\ B - C(a,e) - \Delta B(e) - \Delta B(e,\,'C>K') & (C(a,e) > K) \end{cases}    (5.1)
where K is the allocated budget, ΔB(e) is a user cost corresponding to the event e,
C(a,e) is the financial cost corresponding to (a,e), a is the decision variable and
ΔB(e,'C>K') is a user cost induced by a possibly insufficient budget following the
event e, see Figure 5.1. If the financial cost C(a,e) does not exceed the budget K,
the net benefit is the difference between the benefit and the sum of the budget and the
user cost associated with the event e. Here it is assumed that the unused part of the
budget within a budgeting period is not transferred to the next budgeting period, which
is a commonly known difficulty in the public sector. If the financial cost C(a,e)
exceeds the budget K (budgeting failure), an extra budget must be requested in order
to reinstate the reduced availability, which will be provided at some later point in time,
e.g. in the subsequent budgeting period. Until the extra budget is obtained, the
availability remains reduced, causing the additional user cost ΔB(e,'C>K').
As the allocated budget increases, the probability of budgeting failure P(C > K)
decreases, but so does the net benefit in the case where the budget is not exceeded, and
vice versa. The optimal budget K* and the optimal decision variable a*, e.g. concerning
inspection and maintenance activities, are obtained by maximizing the expected net
benefit E[NB]:

E[NB] = \int_{E} NB(a, e, K)\, dP(e; a)    (5.2)

where E is the set of possible events e and P(e; a) is the probability of the
occurrence of the event e given the decision variable a.
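A minimal Python sketch of this optimization, assuming simple placeholder distributions for the financial cost C(a,e) and the user cost ΔB(e): the expected net benefit of Equation (5.2) is estimated by Monte Carlo sampling for a range of budgets K, and the budget maximizing it is selected. None of the numbers below are taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
N = 200_000
B = 100.0                        # benefit (assumed independent of the decision)
g = 10.0                         # factor for the extra user cost dB(e,'C>K')

def expected_net_benefit(K, a):
    # a is a decision variable shifting the cost distribution (placeholder model)
    C  = rng.gamma(2.0, 5.0 / a, N)          # financial cost C(a,e)
    dB = rng.gamma(1.5, 2.0, N)              # user cost dB(e)
    nb = np.where(C <= K,
                  B - K - dB,                # Equation (5.1), first case
                  B - C - dB - g * dB)       # Equation (5.1), second case
    return nb.mean()                         # Monte Carlo estimate of Equation (5.2)

budgets = np.linspace(5.0, 40.0, 36)
enb = [expected_net_benefit(K, a=1.0) for K in budgets]
K_opt = budgets[int(np.argmax(enb))]
print(f"optimal budget K* ~ {K_opt:.1f}, corresponding E[NB] ~ {max(enb):.2f}")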
5.3. Example
C_T = \begin{cases} K + \Delta B(e) & (C(\Delta t_I, e) \le K) \\ C(\Delta t_I, e) + \Delta B(e) + \Delta B(e,\,'C>K') & (C(\Delta t_I, e) > K) \end{cases}    (5.4)
C_T can be considered the total cost including all consequences and is thus referred to
as the “total cost” in the following. It should, however, be noted that the total cost
represented by Equation (5.4) differs from the typical definition in commonly applied
life cycle analysis in the sense that it includes the budget and the possible additional
user cost due to an insufficient budget. The term ΔB(e,'C>K') accounts for the effect of
an insufficient budget on the reduction of the net benefit. Still, the expected net benefit
defined by Equation (5.2) is maximized by minimizing the ‘total cost’ defined by Equation (5.3).
E = \{(e_0, e_R, e_F);\ \text{for all years and all elements in all structures}\}    (5.5)
For the purpose of the illustration but with no effect on generality, it is assumed that
the inspections are made visually and the probability of detection of corrosion is
assumed to be equal to one, i.e., perfect inspections. As long as the budget is sufficient
for performing the necessary repairs of the corroded elements, those are assumed
performed in connection with the inspections. It is further assumed that the
repaired elements are brought back to their original states, i.e., they are described using
the same probabilistic models as new elements, and that the realizations of the new
elements are independent of those of the previous elements. The basic characteristics of the
probabilistic modelling of deterioration are provided in the next section.
Figure 5.2. Probability of repair (left) and probability of failure (right) for a given
realization of an element.
Figure 5.2 shows the probability of repair and failure at time t after construction,
repair or recovery due to failure for a given realization of an element in the case of
Δt_I = 5. Both the probability of repair and the probability of failure are calculated by
Monte Carlo simulation. The probability of repair is different from zero only at iΔt_I
(i = 1, 2, 3, ...). This is because a repair is made only if visible corrosion is observed
at an inspection. On the other hand, failure can occur at any point in time. The
probability of failure over time varies significantly as the inspection interval changes.
When the inspection interval is small, e.g., Δt_I = 1, the probability of failure is low,
since more elements are already repaired. In contrast, if the inspection interval is large,
elements may fail more frequently before repair due to the less frequent inspections.
Thus, the inspection interval affects
both the probability of repair and the probability of failure. It should be mentioned that
the time axis in the figure does not necessarily represent the structure age after the first
installation, since a structure may have been repaired or replaced. When a structure
performs poorly (i.e., the realizations of the random variables are unfavorable), it will
be repaired or reconstructed relatively early. After each repair or replacement, the new
structures are identical but stochastically independent of the old ones. A structure with
an initially bad performance will thus eventually be replaced by one with a good
performance. This is why the probabilities of failure and repair decrease after their
peak.
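The renewal mechanism described above can be sketched with a small Python simulation: elements are inspected every Δt_I years, repaired when visible corrosion is detected (perfect inspections) and renewed after failure, with each renewal drawing a new independent realization. The lognormal lifetimes below are illustrative stand-ins for the deterioration model of Section 5.3, not the model itself.

import numpy as np

rng = np.random.default_rng(2)
T_end, dt_I, N = 100, 5, 20_000            # horizon [yr], inspection interval, samples
repair_count  = np.zeros(T_end + 1)
failure_count = np.zeros(T_end + 1)

for _ in range(N):
    t = 0.0
    while t < T_end:
        t_vc = t + rng.lognormal(np.log(30.0), 0.5)      # time to visible corrosion
        t_f  = t_vc + rng.lognormal(np.log(10.0), 0.5)   # time to failure
        t_rep = np.ceil(t_vc / dt_I) * dt_I              # first inspection after t_vc
        if t_f < t_rep:                                  # failure occurs before repair
            if t_f <= T_end:
                failure_count[int(t_f)] += 1
            t = t_f                                      # element renewed after failure
        else:                                            # corrosion detected and repaired
            if t_rep <= T_end:
                repair_count[int(t_rep)] += 1
            t = t_rep                                    # element renewed after repair

p_repair  = repair_count / N                             # fraction repaired per year
p_failure = failure_count / N                            # fraction failed per year
print("repairs  at t = 5, 10, ..., 30:", np.round(p_repair[5:31:5], 3))
print("failures at t = 1, ..., 10    :", np.round(p_failure[1:11], 3))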
C = C_I + C_R + C_F    (5.6)
These costs do not include any user costs associated with the repair actions. The user
costs are considered separately in terms of the reduced benefits ΔB(e) . It is assumed
that the reduced benefits ΔB(e) are additive and proportional to the number of
repaired elements. Due to the uncertainties associated with the physical process of the
deterioration, the maintenance costs can be considered as random variables. As the
inspection interval decreases, the repair cost increases while the probability of failure
decreases, and vice versa. The additional user cost due to the lack of budget is assumed
to be proportional to the user cost for repair:
The total life cycle period considered for the maintenance planning is 200 years and
budgeting is assumed to be made annually. The discount rate is assumed to be 2% per
year equivalent to the economic growth per capita. The discount rate by time
preference is neglected in this paper for simplicity. This may be justified for short
budgeting periods, i.e., 1 year, by the result in Nishijima et al. (2007). The cost
parameters assumed in this example are summarized in Table 5.1 together with other
parameters.
Table 5.1. Cost and other parameters assumed in the example.
Number of structures: 50
Number of elements in each structure: 100
Total life cycle time to be considered: 200 years
Discount rate by economic growth: 2% per year
Inspection cost for each structure: 1
Repair cost for each element: 1
Failure cost for each element: 10
User cost for each repair: 1
User cost for each failure: 10
Multiplying factor g: 2, 10 and 100
First, the optimization is made for the optimal budget for each year, for a given
inspection interval. Figure 5.4 (left) shows an example of the optimization of the
budget. The expected total cost E[C_T], which is based on Equation (5.4), becomes
larger as the multiplying factor g becomes larger. Accordingly, the optimal budget
which minimizes the expected total cost becomes larger as g becomes larger. After the
budget for each year is optimized, the expected total costs for all years are summed up
weighted with the corresponding discounting factors. In Figure 5.4 (right), the
discounted expected total costs are shown for each inspection interval. The optimal
inspection interval is obtained as the one which minimizes the discounted expected
total cost. As the multiplying factor increases, the corresponding discounted expected
total cost increases and the optimal inspection interval decreases. Since the optimal
budget for each year for a given inspection interval is already obtained, the optimal
combination of budget and inspection time is derived, see Figure 5.5. The higher
“penalty” due to the lack of budget, which is represented by the multiplying factor g ,
is reflected in the optimal amount of budget in Figure 5.5. In both cases of g = 10 and
g = 100 , the expected financial costs remain the same, while the optimal budget is
higher in the case of g = 100 than in the case of g = 10 , reflecting the precautionary
attitude toward larger consequences due to the lack of budget. It should be mentioned
that the periodic fluctuations of the expected financial cost and the optimal budget
come from the different number of structures to be inspected as mentioned in Section
5.3.2.
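The two-stage procedure can be summarized by the following Python sketch: for each candidate inspection interval the budget of every year is optimized by minimizing the expected total cost of Equation (5.4), and the discounted sums are then compared across intervals. The function expected_total_cost is a hypothetical placeholder standing in for the portfolio simulation; its cost model and all numbers are assumptions.

import numpy as np

rng = np.random.default_rng(3)
r, T_end, g = 0.02, 200, 10.0                 # discount rate, horizon [yr], cost factor

def expected_total_cost(year, dt_I, K, n=2_000):
    # Hypothetical placeholder for one year of the portfolio simulation: a real model
    # would depend on the year through the ages of the structures and on dt_I through
    # the deterioration process.
    C  = rng.gamma(2.0, 10.0 / dt_I, n)       # financial cost of the year
    dB = rng.gamma(1.5, 1.0, n)               # user cost of the year
    CT = np.where(C <= K, K + dB, C + dB + g * dB)   # Equation (5.4)
    return CT.mean()

budgets = np.linspace(1.0, 80.0, 20)
results = {}
for dt_I in (1, 2, 5, 10):
    discounted_total = 0.0
    for year in range(1, T_end + 1):
        E_CT = min(expected_total_cost(year, dt_I, K) for K in budgets)  # optimal budget
        discounted_total += E_CT / (1.0 + r)**year
    results[dt_I] = round(discounted_total, 1)
print("discounted expected total cost per inspection interval:", results)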
Figure 5.4. Optimal amount of budget at the 20th year for an inspection interval of 5
years (left) and optimal inspection intervals (right).
Figure 5.5. Optimal budget and expected financial cost at each year for g = 10 (left)
and g = 100 (right), (not discounted).
5.4. Discussions
In the example the features and advantages of the proposed approach are shown
considering maintenance planning for RC structures. The approach works especially
well in the case of relatively high probability of occurrence of adverse events and
relatively low consequences. For the case where the occurrence probability of adverse
event is relatively small and the consequence is relatively large, e.g., floods or
earthquakes, the annual budget approach may not work well. In such situations,
establishment of a fund shared by projects, which corresponds to the integration of
projects into one portfolio, may be a good strategy. However, the basic idea in the
proposed approach, namely, optimization of budgeting by maximization of the net
benefit still works in these situations. In the present example, optimal budget
distribution over time has a sharp peak, which is inconvenient in practical budgeting.
However, the budget distribution will be averaged out by considering a portfolio which
is composed of structures of different ages. Thus, the budget distribution over time
shown in the example is due to the fact that only structures whose ages are identical are
considered; it does not indicate a limitation of the present approach.
In regard to the net benefit induced by a project, it is assumed that the unused portion
of the budget in the case where the cost does not exceed the budget is lost. However,
this can underestimate the net benefit, since the unused portion of the budget can be
spent for relevant activities: in the case of the example, for instance, it could be used
for additional (unplanned) inspections. In applications, this aspect should be properly
taken into account.
Finally, the assumption made in the simulation that repairs, if necessary, are made
immediately after inspections whether or not the budget is available may not be suitable
if the repair time is crucial. The repair time is, in general, dependent on when the budget
is available; therefore, the budget for one year does affect the time of repair, which
must be reflected in the simulation of deterioration. The repair time also affects the
user cost associated with the delay of repair due to a possible insufficient budget. As
the delay increases, the user cost increases.
5.5. Conclusions
Optimal decision making for maintenance of structures is addressed from a societal
perspective as an optimal budget allocation problem. An approach to find the optimal
budget to be allocated and the corresponding optimal inspection and maintenance
strategy is proposed. Thereby the expected net benefit is adopted as the objective
function to be maximized. In addition to the user costs associated with repair activities,
the user cost which might result from postponed repairs and the consequent reduced
availability due to an insufficient budget is taken into account.
The proposed approach provides a rational framework for decision makers responsible
for the budgeting and planning of maintenance activities for portfolios of structures
and leads to optimal budgets which are consistent with the adverse consequences of
possible insufficient budgets. For the purpose of illustrating the application of the
proposed approach the problem of maintenance planning for a portfolio of RC
structures subject to chloride-induced deterioration is considered. The example clearly
shows that the optimal budgets differ from the commonly applied expected total costs
and this also has an effect on the optimal choice of inspection plans.
Societal performance of infrastructure subject to natural hazards (Paper V)
Kazuyoshi Nishijima
Abstract
The present paper proposes a methodology for assessing the effect of different design
and maintenance policies for infrastructure on societal economic growth. The approach
adopted takes basis in the general economic theories and economic models, and
provides an interface between economics and civil engineering with which the
engineering knowledge can be reflected in the economic models. The proposed
methodology can be utilized by societal decision makers to identify the optimal
investments into infrastructure for ensuring sustainable societal development. An
illustrative example is provided considering sustainable decision making in regard to
design and maintenance of infrastructure subject to natural hazards. Thereby the
advantage of the proposed methodology is shown; it enables one to analyze the
economic growth and the associated uncertainties corresponding to different design
and maintenance policies for infrastructure.
Keywords
Sustainability, societal decision making, reliability theory, economic theory.
6.1. Introduction
Sustainable societal development has become an issue of increased and wide spread
societal attention especially during the last two decades. The tremendous economic
developments of former third world nations such as China and India and the general
impact of globalization have put even larger pressures on our limited natural resources
and fragile environment. Faced with an ever increasing amount of evidence that the
activities of our own generation might actually impair the possibilities for future
generations to meet their needs it has become a political concern that societal
development must be sustainable. The issuing of the famous Brundtland report “Our
Common Future” (Brundtland (1987)) forms a milestone on the political arena. This
important event has enhanced the public awareness that substantial changes of
consumption patterns are called for and has further significantly influenced the
research agendas worldwide; it is fair to state that “sustainable development” has
strongly influenced the consciousness and the moral setting in society.
Recent disasters caused by natural hazard events, e.g. the tsunamis in Southeast Asia in
2004 and the flooding induced by hurricanes in the United States of America in 2005,
have proven the importance of infrastructure in society and revealed how societies
supported by infrastructure, in both developing and developed countries, are vulnerable
to natural hazards. Recognizing the lessons learned from these recent disasters, it is
necessary to reconsider the framework for identifying the optimal level of reliability of
infrastructure in regard to its performance, with due consideration of the role that the
infrastructure plays for societies.
Infrastructure such as road networks, water and electricity distribution systems assists
economic growth. Aschauer (1989) has reinforced this perception by showing that
investment into infrastructure has a strong explanatory power for societal productivity
taking up the case of the United States of America. A number of studies have
confirmed and generalized this observation; some of these studies, however, claim that
the estimated return rates of investment into infrastructure might be biased, see the
review paper by Gramlich (1994).
In the field of civil engineering, the life cycle cost (LCC) optimization concept has
gained a reputation as being a means for identifying optimal designs as well as
maintenance strategies for infrastructure with due consideration of possible
consequences and proper discounting for future expenses. More recently, the LCC
optimization concept often has been applied in the context of sustainable decision
making for infrastructure projects. However, the application of the LCC optimization
concept in this context may not be appropriate since it tends to focus on the marginal
analysis of the benefits and the costs of projects. For instance, the LCC optimization
concept implicitly assumes that the necessary budget is available whenever it is
needed, which in practice is not necessarily true. Nishijima and Faber (2006) discuss
this issue taking into account the opportunity costs that the lack of budget may incur.
Furthermore, the LCC optimization concept does not aim to identify how to optimally
allocate limited resources into different projects; it primarily addresses the
optimization of each individual project or a portfolio of projects assuming these
projects are in any case undertaken. This is especially problematic in the context of
sustainable decision making, since sustainability fundamentally concerns the issue of
allocating limited resources among different societal sectors and projects. From this
perspective, the optimization problem in the context of sustainable decision making
should be formulated as: 1) given the amount of investment into the civil engineering
sector, how much of the investment should be directed to new construction and
maintenance works respectively and then 2) at the level of societal decision making
how much of the investment should be allocated to the civil engineering sector.
Whereas the latter optimization is conducted from the perspective of societal decision
makers, the former optimization is a civil engineering issue. However, these two
optimizations have never been discussed jointly due to the lack of an interface
between civil engineering and economics.
Economics plays the central role in analyzing the development of society in the most
aggregated way. It considers not only economic development but also environmental
issues, societal preferences regarding e.g. issues of human safety and inter- and intra-
generational equity etc. The general discussion on the implications of sustainability is
also ongoing in the field of economics, although no agreement is yet established, see
e.g. Perman et al. (2003); the present paper assumes that the agreement on implications
of sustainability should be made in the general economics. Therefore, the present paper
does not aim at defining the objective function and constraints concerning sustainable
With this background the present paper proposes an interface between the general
economic theories and civil engineering. The proposed methodology takes basis in the
methodology proposed by Nishijima and Faber (2007c). However, an extension is
made such that the losses of infrastructure capital due to natural hazards can be
considered in an explicit probabilistic manner. After formulating the methodology, it
is applied to investigate the effect of different target reliabilities for infrastructure
facilities on the economic growth and the degree of uncertainty associated with the
economic growth.
Two issues are addressed in the present paper. The first concerns the reliability of
infrastructure facilities. Economic models must be able to account for the different
reliabilities of infrastructure facilities resulting from different design and maintenance
policies. In general, the deterioration rate of infrastructure depends on the target
reliability in regard to any type of reduction of performance of the infrastructure and
thus depends on the policy in regard to design and maintenance. Usually, in the field of
economics the deterioration rate is estimated directly or indirectly based on historical
data, see e.g. Aschauer (1989), Gramlich (1994) and Greenwood et al. (2000). When
historical data are used as a basis, however, there is no possibility to reflect the effect of
new policies on the future deterioration of infrastructure facilities. The proposed idea in the
present paper is that the reliabilities of infrastructure facilities in the engineering sense
can be related to the deterioration rates in an economic sense. Secondly, two different
types of investments into infrastructure should be differentiated; 1) the investment into
new construction of infrastructure facilities, which will increase the economic output
through the increased infrastructure capital stock, and 2) the investment for achieving
higher reliability of infrastructure facilities, which does not directly increase the
economic productivity but improves the durability of the structures and prolongs their
lifetime. The distinction between these two different types of investments is realized by
assessing the infrastructure capital stock by physical units, as opposed to monetary
units.
Figure 6.1. Focused role of infrastructure in an economic context, after Nishijima and
Faber (2007c).
The technology currently available determines the level of economic output given the
amounts of different types of capitals, e.g. human capital, physical capital etc. This
relation is in the field of macroeconomics often represented by a production function
as:
The production function can be estimated using historical data by time-series analysis
and/or cross-sectional studies. Thereby the capitals are measured in physical terms e.g.
kilowatts of electricity generating capacity or length of road or in monetary terms (by
multiplying the amount measured in physical units with the corresponding prices).
However, for the present purpose it is important to measure the infrastructure capital in
physical terms since otherwise the investment for achieving higher reliability and the
investment for increasing the amount of infrastructure cannot be distinguished. Several
datasets and estimated production functions in regard to several types of infrastructures
are available, e.g. Canning (1998) and Canning and Bennathan (2000).
9 They consider the uncertainty of capital depreciation in terms of uncertain changes of the monetary
value of capitals. Thus, the depreciation therein does not concern the change of the amount of
physical capital due to e.g. natural hazards.
R_t \in \Omega_{F,t}    (6.3)

In cases where the failure domain consists of m independent failure event sets
\Omega_{F,t}^{(i)} (i = 1, 2, \ldots, m) and \Omega_{F,t} = \bigcup_i \Omega_{F,t}^{(i)}, the conditional probability of failure can
be written as:

p_{F,t} = P\!\left[ R_t \in \bigcup_i \Omega_{F,t}^{(i)};\ (t, t+\Delta t] \,\middle|\, R_t \notin \Omega_{F,t};\ [0,t] \right] = 1 - \prod_i \left( 1 - P\!\left[ R_t \in \Omega_{F,t}^{(i)};\ (t, t+\Delta t] \,\middle|\, R_t \notin \Omega_{F,t};\ [0,t] \right] \right)    (6.5)
In this way, the conditional probability of failure defined in terms of Equation (6.4) can
be regarded as a generalized measure of capital deterioration. The advantage of the
definition of infrastructure failure in this manner is that it enables the use of the
reliability theory for the calculation of the probabilities corresponding to the structural
design and maintenance policies for infrastructure facilities whenever probabilistic
models are available. Otherwise, probabilities estimated by expert judgment can partly
be integrated into the probabilistic terms in Equation (6.5); hence, it is possible to
combine objective and subjective evaluations in order to quantify the conditional
probability of failure.
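A small Python sketch of Equation (6.5), combining component probabilities that may come partly from reliability analysis and partly from expert judgment (the numerical values are illustrative assumptions):

def combined_failure_probability(p_components):
    """Equation (6.5) for independent failure event sets: p_F = 1 - prod_i (1 - p_i)."""
    p_survive = 1.0
    for p_i in p_components:
        p_survive *= (1.0 - p_i)
    return 1.0 - p_survive

p_structural = 1e-4    # e.g. from structural reliability analysis
p_hazard     = 5e-3    # e.g. from a probabilistic natural hazard model
p_expert     = 1e-3    # e.g. from expert judgment on loss of required functionality
print(combined_failure_probability([p_structural, p_hazard, p_expert]))   # ~0.0061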
where K_t^{new}(\cdot) is the amount of new infrastructure constructed at time t and X_t is
the amount of failed infrastructure. In general, X_t should be considered as a random
variable. Note that by applying the expectation operation, Equation (6.6) is reduced to
Equation (6.2). K_t^{new}(a_t) is a function of the design policy a_t at time t and is
written as:

K_t^{new}(a_t) = \frac{I_t}{q_t(a_t)}    (6.7)
for the infrastructure until time t . In cases where large-scale hazards, e.g. earthquakes
and hurricanes, are of concern the geographical distribution of the infrastructure is also
a relevant factor. Finally, since the budget allocated to infrastructure is divided into the
investments into new construction and maintenance works the following equation must
hold:
G_t = I_t + M_t\!\left(\{a_i\}_{i=1}^{t}, \{b_i\}_{i=1}^{t}\right)    (6.8)

where G_t is the allocated budget for the civil engineering sector at time t and M_t
is the budget necessary for maintenance works. M_t is a function of \{a_i\}_{i=1}^{t} and
\{b_i\}_{i=1}^{t}. With these settings it is possible to identify the optimal design
and maintenance policies \{a_i^*\}_{i=1}^{t} and \{b_i^*\}_{i=1}^{t} given the budget sequence
\{G_i\}_{i=1}^{t} = \{G_1, G_2, \ldots, G_t\}.
Y_t = A K_t^{\alpha}    (6.9)
which is a special form of Equation (6.1). Therein A is the factor that represents the
technology in the society, which is assumed constant. The exponent α represents the
marginal increase of the economic output with respect to the infrastructure capital. It is
assumed that the infrastructure capital is exposed to natural hazards and that the
infrastructure capital can be geographically divided into n segments within which the
failures of infrastructure facilities are perfectly correlated and between which the
failures are independent. Namely, the parameter n represents the geographical extent
of the natural hazards relative to the size of the society. Furthermore, the occurrences
of natural hazards are assumed to be temporally independent. Under these assumptions
the amount of capital which is lost at time t can be expressed as:
X_t = \frac{N_t}{n} K_t    (6.10)

where N_t represents the number of failed segments among the n independent segments
with probability of failure p_f; it follows the binomial distribution with n trials and
probability of failure equal to p_f, where p_f is the probability of failure within the
duration \Delta t = 1 year. Note that as n becomes large, X_t converges to its expected
value E[N_t]/n \cdot K_t = p_f K_t; thus the equation of capital accumulation is reduced to
the form of Equation (6.2). By substituting Equation (6.10) into Equation (6.6), the
equation of capital accumulation is written as:

\Delta K_t = \frac{I_t}{q_t(a_t)} - \frac{N_t}{n} K_t    (6.11)
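A minimal Python sketch of the resulting growth model, Equations (6.9)-(6.11): output follows Y_t = A K_t^α, a fixed share of output is invested into new infrastructure, and hazard losses are binomially distributed over the n independent segments. All parameter values are illustrative placeholders rather than the values of Table 6.1.

import numpy as np

rng = np.random.default_rng(4)
A, alpha  = 1.0, 0.3          # technology factor and output elasticity in Eq. (6.9)
s_invest  = 0.2               # share of output invested into new infrastructure
q, p_f    = 1.0, 0.02         # unit construction cost and annual failure probability
n_seg     = 5                 # number of independent segments
K0, T_end = 10.0, 100         # initial capital and time horizon [yr]
n_paths   = 5_000             # number of simulated paths

K = np.full(n_paths, K0)
Y_paths = np.empty((T_end, n_paths))
for t in range(T_end):
    Y = A * K**alpha                              # Equation (6.9)
    I = s_invest * Y                              # investment into new construction
    N_fail = rng.binomial(n_seg, p_f, n_paths)    # number of failed segments
    K = K + I / q - (N_fail / n_seg) * K          # Equation (6.11)
    Y_paths[t] = Y

q05, q50, q95 = np.percentile(Y_paths[-1], [5, 50, 95])
print(f"output at t = {T_end}: median {q50:.2f}, 90% band [{q05:.2f}, {q95:.2f}]")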
The values of the parameters assumed in this example are shown in Table 6.1. These
values are postulated for illustrative purposes; in practice, however, they can and
should be determined by economic as well as engineering analyses.
Table 6.1. Assumed parameters in the example.
The analyzed economic output paths as a function of the policies and of different
numbers of independent segments are shown in Figure 6.2. The figure shows the
median and the 5% and 95% quantiles of the economic output as a function of time.
Figure 6.2 (left) shows the economic output paths when the number of independent
segments is relatively small (n = 5). The economic growth is faster when policy 1
(lower reliability associated with lower construction cost) is adopted. However, in the
long run the economy grows more when policy 2 (higher reliability associated with
higher construction cost) is adopted. It should be mentioned that the economic growth
path under policy 1 is associated with larger uncertainty, i.e. it results in less stable
economic growth, compared to the economic growth path under policy 2. The
economic growth paths are more stable, in the sense that the uncertainty on the
economic output is smaller, when the number of independent segments is larger (n = 50), see
Figure 6.2 (right). The results shown in Figure 6.2 and the interpretation of the results
stated above are coherent with engineering understanding. Furthermore, it is possible
with the proposed methodology to evaluate in a quantitative manner the effects of
different policies on the economic growth within a general economic model
framework.
Figure 6.2. Economic output paths for different policies and different numbers of
independent segments n = 5 (left) and n = 50 (right).
6.6. Discussion
In practical applications, the amount of infrastructure capital losses due to natural
hazards can be readily assessed by risk analysis together with Geographical
Information Systems (GIS), see Bayraktarli and Faber (2007). The design costs and
maintenance costs for infrastructure facilities corresponding to different policies can be
optimally identified using the framework proposed by Nishijima et al. (2008). This
framework models the infrastructure using hierarchical Bayesian networks and
formulates the problem as a constrained optimization problem where the expected
costs are considered as the objective function and the requirements to the performance
of infrastructure, e.g. target reliabilities and acceptance criteria for fatalities, are
accounted for by constraints. These techniques can be incorporated into the proposed
methodology. The assumption that failures of structures are independent generally does
not hold even if the hazard events that affect each structure are independent. This is
because of the presence of modeling uncertainties, e.g. on the resistance of structures,
that may commonly affect all considered structures, see Faber et al. (2007a). Thus the
proposed methodology and the analysis in the example of this paper should be
considered as being conditional on the modeling uncertainties. In general the
integration with respect to the modeling uncertainties is necessary in the analyses of
losses of infrastructure facilities and societal economic growth.
6.7. Conclusion
The present paper proposes a methodology for assessing the effect of different design
and maintenance policies for infrastructure on societal economic growth. The proposed
methodology can serve as a component of a general decision making framework for
optimal resource allocation in the context of sustainable societal development. The
proposed methodology requires the amount of investments into infrastructure as an
input parameter. It incorporates the design policy and the maintenance policy as
decision variables. It provides the sequence of the amount of capital together with the
corresponding economic growth as outputs. In an example the advantage of the
proposed methodology is illustrated; it enables one to analyze in a quantitative manner
the economic growth and the economic stability corresponding to different design and
maintenance policies for infrastructure.
Optimal design and maintenance policy for infrastructure from a macroeconomic
perspective
7.1. Introduction
In Chapters 3, 4 and 5, the optimization problem of the reliability of individual
structures or groups of structures is addressed. In these chapters, the reliability or
decision variables related to structural performance are optimized based on the
life-cycle cost optimization concept. Strictly speaking, the life-cycle cost optimization
concept can be applied only if the benefit and cost of the project concerned are
assumed marginal in the economy; that is, the economic growth is not affected by
whether or not or how the project is undertaken. Thus, the life-cycle cost optimization
concept may not be appropriate as the philosophical principle for decision making in
cases where the consequences of the decisions are considered as non-marginal.
In practice, there are many situations where the consequences of the decisions are
considered as non-marginal. Such decision situations include, for example, code
making in which the acceptable reliability of structures is controlled, and design and
maintenance strategies on nationwide infrastructure projects. These decisions affect the
capital accumulation of infrastructure and thus, in turn, the long-term development of
the economy. Therefore, in these decision situations a non-marginal economic
framework has to be adopted.
As a first step to develop a general decision framework for facilitating these decision
situations, this chapter examines how the optimal reliability of infrastructure may be
identified within the economic growth theoretical framework. For this, a simplistic
economic model is developed, employing the approach proposed in the previous
chapter for incorporating the reliability of infrastructure in economic models. Using
the developed economic model, it is investigated how the reliability of infrastructure
affects the economic growth and how the optimal reliability at each point in time depends
on the economic level. The aim of this chapter is to show the potential of such a
general framework to provide the optimization principle for non-marginal decision
analysis.
The structure of this chapter is as follows. First, the principle of the life-cycle cost
optimization concept is reviewed. Then, the assumptions and limitations of the concept
are pointed out. Second, previous research works on the role of civil infrastructure
within the economic growth theoretic framework are introduced briefly, followed by
some critical reviews on the assumptions made in these works. Third, a simplistic
7.2.1. Derivation¹¹
A project is socially profitable if the social welfare is increased through the project.
This is expressed as:
\Delta W = W^1 - W^0 > 0    (7.1)

where W is the social welfare function, and W^0 and W^1 are the social welfares
when the project is not undertaken and when it is undertaken, respectively. In
general, the social welfare function is a function of many variables that concern the
utilities of all members in the society. However, here it is assumed that the social
welfare function consists of the utility function of a representative household and a
discount factor, and the utility is a function only of the consumption of the household.
Under these assumptions, the social welfare function can be written as:
W = \int_0^{\infty} u(c_t)\, e^{-\rho t}\, dt    (7.2)

\Delta W = \int_0^{\infty} \frac{\partial u(c_t)}{\partial c_t}\, \Delta c_t\, e^{-\rho t}\, dt = \int_0^{\infty} \lambda_t\, \Delta c_t\, dt    (7.3)

where \Delta c_t is the perturbation of the consumption from a baseline consumption path,
and \lambda_t is the discount factor, written as:
10 Other derivations can be found in e.g. Ramsey (1928) and Koopmans (1965) in the context of the
economic growth theory.
11 For simplicity, here it is assumed that a representative individual lives for an infinite time. However,
the derivation can be extended for the case where many generations live for finite lifetimes, which is
the situation assumed in the generation-adjusted discounting concept introduced in Chapter 4.
Furthermore, it is assumed that the population is constant over time.
\lambda_t = \frac{\partial u(c_t)}{\partial c_t}\, e^{-\rho t}    (7.4)
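The step leading from Equation (7.4) to Equation (7.6) is presumably the standard Ramsey-type argument; a sketch, using the dot notation of footnote 12 and introducing the elasticity η of marginal utility (consistent with footnote 14, where u(c_t) = \ln c_t gives η = 1), reads:

\frac{\dot{\lambda}_t}{\lambda_t} = \frac{u''(c_t)\,\dot{c}_t}{u'(c_t)} - \rho = -\eta\,\delta_t - \rho, \qquad \eta = -\frac{c_t\, u''(c_t)}{u'(c_t)}, \qquad \delta_t = \frac{\dot{c}_t}{c_t}

Integrating with \lambda_0 = 1 (footnote 13) and constant \eta and \delta_t = \delta then yields Equation (7.6).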
If the growth rate of consumption is assumed constant and given as \delta_t = \delta, then the
discount factor is obtained as¹³:

\lambda_t = e^{-(\eta\delta + \rho)t}    (7.6)
By substituting Equation (7.6) into Equation (7.3), the criterion for the project
appraisal is finally obtained as:
\Delta W = \int_0^{\infty} \Delta c_t\, e^{-(\eta\delta + \rho)t}\, dt > 0    (7.7)
In cases where several decision alternatives are available for the project, the above
criterion should be applied for the decision alternative that maximizes ΔW .
12 The dot " ⋅ " on the top of symbols represents the derivative with respect to time.
13 The choice of the constant λ_0 is arbitrary, thus here it is chosen as λ_0 = 1.
Note that the benefit B_t may vary as a function of time but is often assumed to be
independent of the decision variable a. By substituting Equation (7.8) into Equation
(7.7), neglecting the constant benefit term, and taking the negative sign, the objective
function, i.e. the life-cycle cost C_T(a), is obtained as a function of the decision
variable a as:

C_T(a) = \int_0^{\infty} C_t(a)\, e^{-(\eta\delta + \rho)t}\, dt    (7.9)¹⁴
Whenever uncertainty is involved in the cost term, the expectation should be taken as:

\bar{C}_T(a) = \int_0^{\infty} E[C_t(a)]\, e^{-(\eta\delta + \rho)t}\, dt    (7.10)

where \bar{C}_T(a) is the expected life-cycle cost as a function of the decision variable a,
and this should be employed as the objective function in the optimization.
As is clear from the above derivation, the growth rate \delta_t of consumption does not
need to be constant, although in practice it is often assumed constant. It may also be
worth mentioning that whenever uncertainty is involved in the discount rates \delta and
\rho, the expectation should be taken as¹⁵:

\bar{C}_T(a) = \int_0^{\infty} E[C_t(a)]\, E[e^{-(\eta\delta + \rho)t}]\, dt    (7.11)
14 If u(c_t) = \ln c_t, then \eta = 1, and it coincides with the formulation of the objective function in
Chapter 4.
15 Here, the cost term is assumed to be independent of the term of the discount factor. However, if they
are not considered as being independent, the expectation operator should be only applied to the
product of the two terms; this may be the case when some of the costs included in the cost term,
which are measured in real terms, may change in accordance with the economic growth.
Note that the expectation operator is applied to the discount factor, not to the discount
rates, see e.g. Newell and Pizer (2004) for more discussion.
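A small Python illustration of this remark: with an uncertain discount rate, E[e^{-rt}] exceeds e^{-E[r]t}, so the effective discount rate implied by Equation (7.11) declines with the horizon. The two-point distribution of the rate used below is an arbitrary assumption.

import numpy as np

rates = np.array([0.01, 0.05])        # equally likely discount rates (assumption)
probs = np.array([0.5, 0.5])

for t in (10, 50, 100, 200):
    mean_factor  = np.sum(probs * np.exp(-rates * t))   # E[e^{-rt}], as in Eq. (7.11)
    naive_factor = np.exp(-np.dot(probs, rates) * t)    # e^{-E[r] t}
    r_eff = -np.log(mean_factor) / t                    # effective discount rate
    print(f"t = {t:3d} yr:  E[e^-rt] = {mean_factor:.4f},  "
          f"e^-E[r]t = {naive_factor:.4f},  effective rate = {r_eff:.4f}")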
Concerning the production function that incorporates the infrastructure capital, there
are a number of research works available both theoretically (e.g. Glomm and
Ravikumar (1994) and Duggal et al. (1999)) and empirically (e.g. Aschauer (1989),
Easterly and Rebelo (1993) and Canning and Bennathan (2000) ). There are also some
research works on the estimation of the deterioration rate of infrastructure capital, see
e.g. Gramlich (1994) and Greenwood et al. (2000). However, only a few research
works are available that explicitly treat the deterioration rate of infrastructure capital as
a variable which can be controlled in terms of maintenance policy on infrastructure,
e.g. Rioja (2003) and Kalaitzidakis and Kalyvitis (2004).
For example, Rioja (2003) considers the amount of investment in maintenance work
for the infrastructure (relative to economic output) as a control variable, and the
optimal investment ratio in maintenance work is derived. Kalaitzidakis and Kalyvitis
(2004) extend Rioja's economic model by endogenizing the decision of budget
allocation into both investment in the construction of new infrastructure and
investment in maintenance work for existing infrastructure.
These pioneering works are remarkable in the sense that the deterioration rate is
considered as a variable and can be optimized through the investment ratio into
maintenance work. However, the relations between the deterioration rate and the
investment ratio assumed in the models are not realistic. One of the drawbacks of these
assumptions is that the deterioration rate at any time is dependent only on the current
investment ratio in maintenance work; the current deterioration rate is not a function of
past maintenance policies, and the current maintenance work does not affect the future
deterioration rate. Furthermore, the effect of differing design policies on the
deterioration of infrastructure is not considered.
reduce the future costs for maintenance work. Similarly, undertaking maintenance
work at an earlier stage of deterioration would reduce additional maintenance costs in
the future. Thus, the investment in construction and maintenance works for reducing
the deterioration rate can be considered at least partly as an investment into the future.
However, the economic models proposed by those pioneering works may fail to
capture this nature of the investment.
Y(t) = F(K(t), L(t))    (7.12)

Herein, it is furthermore assumed that the capital K(t) consists only of infrastructure
capital. Assuming that the production function exhibits constant returns to scale, the
production function can be reformulated in terms of variables per capita¹⁷ as:

y(t) = \frac{Y(t)}{L(t)} = \frac{F(K(t), L(t))}{L(t)} = F(K(t)/L(t), 1) = f(k(t))    (7.13)
where y(t) and k(t) denote the output and capital per capita at time t, and f(\cdot)
represents the production function in terms of the variables per capita. In addition to
these assumptions, it is assumed that the saving rate of the household is exogenously
given as e (0 < e < 1) and that the amount of labor is constant over time¹⁸.
The important difference between the economic model assumed here and the models
employed in Rioja (2003) and Kalaitzidakis and Kalyvitis (2004) appears in the
equation of motion for the capital accumulation, especially in the way infrastructure
deterioration is modelled.
Consider the infrastructure constructed at time s. The expected service life time T^s
of the infrastructure and the associated costs q^s for construction and maintenance work
are assumed to be a function of the design and maintenance policy a^s at time s, i.e.
T^s = T(a^s) and q^s = q(a^s). Herein, the associated costs refer to all the costs that are
required in order to realize the target expected service life T(a^s). Whereas the service
life time of infrastructure is in general a random variable, it is assumed here for
simplicity that the service life time is deterministically represented by its expected
value T^s. Furthermore, it is assumed that the infrastructure provides full functionality
until it exceeds the expected service life time and does not provide any functionality
thereafter. Note that for the assessment of the expected service life time the approach
presented in Section 6.4.1 is useful. In this setting the failure of the infrastructure
should be interpreted in a broader sense; the relevant failure modes include not only
physical collapse but also the unavailability of the required functionality for any reason,
e.g. severe deterioration, failure to satisfy a given acceptable safety level, and even
societal obsolescence¹⁹. The expected service life T^s thus defined can be interpreted to
represent the reliability of the infrastructure; the longer the expected service life of the
infrastructure, the higher its reliability.
g(t; s, T^s) = \begin{cases} 0 & (t < s \ \text{or}\ t > s + T^s) \\ 1 & (s \le t \le s + T^s) \end{cases}    (7.14)

\kappa^s(t) = k^s \cdot g(t; s, T^s)    (7.15)
Assume that all the current and future costs for construction and maintenance work for
the infrastructure constructed at time s are invested at time s, and denote the overall
cost per unit capital by q^s²⁰. Since the amount of investment is assumed to be given
exogenously as i(s) = e y(s), the increment k^s of the capital due to the investment at
time s is given as:
19 See Section 6.4.1 for the definition of the generalized capital deterioration.
20 If the variables in the economic model are measured in terms of a physical unit, this cost per unit
capital should be interpreted as the multiplying factor for adjusting the difference of the required
amount of resources for different design and maintenance policies.
k^s = \frac{i(s)}{q^s} = \frac{e\, y(s)}{q^s}    (7.16)

k(t) = \int_0^{t} \kappa^s(t)\, ds + k_0(t) = \int_0^{t} k^s\, g(t; s, T^s)\, ds + k_0(t)    (7.17)
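A discretized Python sketch of Equations (7.16)-(7.17): each year's investment creates a capital vintage k^s = e y(s)/q(a^s) that provides full service for T(a^s) years and then drops out. The Cobb-Douglas production function and the mapping from policies to (q, T) are illustrative assumptions.

import numpy as np

e, alpha, A = 0.2, 0.3, 1.0                   # saving rate and Cobb-Douglas f(k) = A k^alpha
f = lambda k: A * k**alpha
q_of = {1: 1.0, 2: 1.4}                       # unit cost q for policies 1 and 2 (assumed)
T_of = {1: 30,  2: 60}                        # expected service life T for policies 1 and 2

def simulate(policy, k0=5.0, T0=30, horizon=150):
    vintages = [(k0, 0, T0)]                  # (size k^s, construction time s, life T^s)
    k_path = []
    for t in range(horizon):
        k_t = sum(ks for ks, s, Ts in vintages if s <= t <= s + Ts)   # Equation (7.17)
        k_path.append(k_t)
        ks_new = e * f(k_t) / q_of[policy]                            # Equation (7.16)
        vintages.append((ks_new, t + 1, T_of[policy]))                # new vintage next year
    return np.array(k_path)

for pol in (1, 2):
    k = simulate(pol)
    print(f"policy {pol}: k(50) = {k[50]:.2f}, k(149) = {k[149]:.2f}")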
The objective function of the dynamic optimization problem here is the social welfare
function, which is defined as:
W = \int_0^{\infty} U(c(t))\, e^{-\rho t}\, dt    (7.18)
subject to:
c(t) = (1 - e) f(k(t))    (7.20)

k^s = \frac{e\, f(k(s))}{q(a^s)}    (7.16)'
k(t) = \int_0^{t} k^s\, g(t; s, T(a^s))\, ds + k_0(t)    (7.17)'
with the initial conditions given by the initial amount k_0(0) = k_0 of the infrastructure
capital and the expected service life time of the infrastructure initially available. The set
of decision variables that should be optimized by societal decision-makers is the set of
design and maintenance policies \{a^s\}_{s=0}^{\infty}.
At the steady state, where the increment of the capital exactly compensates the
depreciation (see Figure 7.2), the following relation holds:

\frac{e\, f(k^*)}{q^*} = \frac{k^*}{T^*}    (7.21)

where the superscript "*" signifies that the quantities are evaluated at the steady state.
The left hand side comes from Equation (7.16), and the right hand side is obtained from
the assumptions made on the deterioration of the infrastructure.
Figure 7.2. Steady state where the increment of the capital exactly compensates the
depreciation.
e\, f(k^*) = \frac{q^*}{T^*}\, k^*    (7.22)
From the assumed properties of the production function (df/dk > 0, d^2 f/dk^2 < 0), k^*
is maximized when q^*/T^* is minimized. Since the highest production level leads to
the highest consumption level for a given saving rate, the optimal policy at the steady
state is the policy a^* that minimizes q^*/T^*.
Note, however, that this steady state does not necessarily correspond to the optimal
state in the sense that the consumption is maximized. This is because of the assumption
that the saving rate e is exogenously given. Since the saving rate e corresponds
one-to-one with the amount k^* of the capital at the steady state through the relation
given by Equation (7.22), the optimal saving rate that maximizes consumption at the
steady state is characterized by k^* as:

\max_{k^*} c^* = (1 - e) f(k^*) = f(k^*) - \frac{q^*}{T^*}\, k^*    (7.23)
Thus, the optimal amount k_{opt} of the capital that maximizes consumption at the steady state is obtained as the amount that satisfies the following equation:
f'(k_{opt}) = \frac{q^*}{T^*}    (7.24)
where f ' represents the derivative with respect to k . This corresponds to the golden
rule of accumulation for the Solow-Swan model, see Phelps (1961).
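As a worked illustration of Equation (7.24): if the production function is assumed to be of Cobb-Douglas form, f(k) = A k^α (an assumption made here purely for illustration), the golden-rule condition can be solved in closed form, k_opt = (α A T*/q*)^{1/(1-α)}. The short sketch below verifies this numerically for arbitrary parameter values.

# Worked example of the golden-rule condition (7.24) for an assumed
# Cobb-Douglas production function f(k) = A * k**alpha, for which
# f'(k) = alpha * A * k**(alpha - 1) and hence
#   k_opt = (alpha * A * T_star / q_star) ** (1 / (1 - alpha)).
A, alpha = 1.0, 0.3          # illustrative production parameters (assumptions)
q_star, T_star = 2.0, 100.0  # steady-state cost per unit capital and service life (assumed)

k_opt = (alpha * A * T_star / q_star) ** (1.0 / (1.0 - alpha))
f_prime = alpha * A * k_opt ** (alpha - 1.0)
print(f"k_opt = {k_opt:.2f}, f'(k_opt) = {f_prime:.4f}, q*/T* = {q_star / T_star:.4f}")

With these illustrative values the printed marginal product f'(k_opt) coincides with q*/T*, i.e. the condition (7.24) is recovered.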
The optimization principle obtained from Equation (7.22) for the design and maintenance policy on infrastructure shows that the sum of the initial cost and the maintenance cost of the infrastructure divided by its service life time, i.e. the average cost per unit time, should be minimized. This is intuitively appealing. In order to investigate this principle further, an illustrative relationship between the expected service life time T and the average cost per unit time q(T)/T (see footnote 21) is shown in Figure 7.3. For a smaller expected service life time, the average cost per unit time is higher, because the overall cost divided by a shorter expected life time is disproportionately large. On the other hand, infrastructure with a very long expected service life may be very costly for technical reasons and/or may not even be feasible for other reasons, e.g. societal obsolescence. This is why in Figure 7.3 the average cost per unit time increases sharply
21 Both the cost q(a) and the expected service life T(a) are functions of the decision variable a. However, since the expected service life corresponds one-to-one with the decision variable, the cost can be considered as a function of the expected service life time, i.e. there exists a function such that q(T) = q(a). Variables without the superscript s represent the variables at an arbitrary time.
for a very long expected life time. Between these two extremes, the average cost per
unit time moderately decreases as the expected service life time increases.
One of the most relevant differences between the optimization principle obtained here and the life-cycle cost optimization commonly utilized in engineering decision-making is that the principle obtained here does not involve failure cost terms, which play an important role in life-cycle cost optimization. The explanation is as follows: first, in the economic model considered here (as in most economic models), the loss of infrastructure due to failure is accounted for in the deterioration terms of the capital accumulation equation (see Equation (6.2) or Equation (7.17), although in Equation (7.17) the deterioration term is implicit); second, the reduction of the economic output associated with the loss of capital is accounted for through the production function by substituting the correspondingly smaller amount of capital. Thus, the possible consequences of the loss of infrastructure are already taken into account. Note that although the objective function of the optimization principle obtained above and the objective function of the life-cycle cost optimization principle are not the same^22, this is not contradictory. In fact, the contexts in which these two principles are assumed to be applied are different; the life-cycle cost optimization principle is suitable for marginal decision analysis, whereas the principle obtained above is suitable for non-marginal decision analysis.
Figure 7.3. Relationship between the expected service life time T(a^s) and the average cost per unit time q(a^s)/T(a^s) at any given time s.
22 Since the objective function derived in this section is the objective function at the steady state, the objective function in the life-cycle cost optimization should assume a zero discount rate for economic growth, \delta = \dot{c}/c = 0, in order for the comparison to be meaningful.
Note that under this assumption the optimal policy at the steady state corresponds to T^* = 100 years, because the annual average cost, q(T)/T = 1/T + T/100^2 (see Table 7.1), is minimized at T^* = 100; setting the derivative -1/T^2 + 1/100^2 equal to zero yields T = 100.
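A quick numerical check of this minimization, assuming only the stated form of the annual average cost, is sketched below.

import numpy as np

# Numerical check that q(T)/T = 1/T + T/100**2 is minimized at T = 100,
# consistent with setting the derivative -1/T**2 + 1/100**2 to zero.
T = np.linspace(10.0, 300.0, 10000)
avg_cost = 1.0 / T + T / 100.0 ** 2
print(f"minimizing T ~ {T[np.argmin(avg_cost)]:.1f} years")   # ~100 years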
Table 7.1. Functional forms and parameters postulated in the optimization problem.
The optimized^23 service life time in each period and the corresponding economic growth path (denoted as the dynamically optimized policy) are shown in Figure 7.4. It is seen that the optimal policy, which maximizes the social welfare, is to choose a shorter expected service life time at an earlier stage of the economy and to switch to a longer expected service life time later. It should be mentioned that the optimized expected service life time after 100 years is not T^* = 100 years, which would lead to the highest steady state. This is because the contribution of the utility of future generations to the social welfare is small, so that higher consumption by earlier generations matters more for reaching a higher social welfare. For comparison purposes, two other economic paths for different policies are calculated: the economic growth path in the case where the expected service life time is fixed at 100 years for all periods (denoted by "T^s = 100 years, fixed" in the figure) and the economic growth path in the case where the expected service life time is incrementally increased from 40 years to 100 years (denoted by "step" in the figure). It is clearly seen that with the "fixed" policy the economy suffers lower economic output in earlier years, although in the long run the economic output can reach the highest value. Under the "step" policy the economy can grow as fast as under the dynamically optimized policy in the earlier years. However, the economic growth becomes slower in later years because of the higher design and maintenance costs for infrastructure with the longer expected service life time. The calculated social welfare is highest for the dynamically optimized policy, second highest for the "step" policy and lowest for the "fixed" policy. That is, from the viewpoint of social welfare maximization, the application of the steady-state optimal policy in the transition state is suboptimal.
23 Note that this dynamic optimization result is only approximate, because the time horizon is truncated at a finite time (in this calculation, 200 years). This truncation might be problematic because a sound strategy for a generation living just before the 200-year time limit would be to construct infrastructure with a very short expected service life in order to increase economic output in the short term, without considering the severe deterioration of the infrastructure that would occur after 200 years. However, the main conclusion of this section remains valid, i.e. that the application of the steady-state optimal policy in the transition state is suboptimal, because the social welfare corresponding to the "dynamically optimized" policy is calculated using the obtained expected service life times and is larger than the social welfare corresponding to the policy in which the expected service life time T^s = 100 years is adopted for the whole period of time.
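To indicate why the truncation has only a limited effect on the welfare comparison, the sketch below evaluates the share of the welfare integral that is discounted away beyond the truncation time for a constant utility level; the value of ρ is an illustrative assumption, since the parameter values of Table 7.1 are not reproduced here.

import math

# Share of the welfare integral W = int U(c) exp(-rho t) dt contributed after
# the truncation time t_trunc, for a constant utility level: exp(-rho * t_trunc).
# The value of rho below is an illustrative assumption.
rho = 0.02          # pure-time preference rate [1/yr] (assumed)
t_trunc = 200.0     # truncation time used in the calculation [yr]

tail_share = math.exp(-rho * t_trunc)
print(f"share of welfare beyond {t_trunc:.0f} years: {tail_share:.3%}")   # ~1.8%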
Figure 7.4. Economic growth paths (top) and expected service life times (bottom).
the high cost but highly reliable infrastructure; the optimal policy on the reliability of
civil infrastructure at each time depends on the current economic output level.
The economic model considered in this chapter is simplistic. There are many possibilities to extend it: including private capital and other types of capital in the economy; considering technological development or employing endogenous growth models; modeling the deterioration of infrastructure in different probabilistic ways; and using a more realistic budget framework, especially for maintenance costs. These extensions are addressed as future research tasks.
8. Conclusions and outlook
8.1. Conclusions
In the present thesis, the issues of sustainable decision-making in civil engineering,
especially design and maintenance strategies for structures, are addressed. These issues
are examined from two perspectives, i.e. marginal decision analysis and non-marginal
decision analysis. Within the context of marginal decision analysis, sustainable
decision problems can be formulated as constrained optimization problems. Therein,
the objective function is the expected discounted life-cycle cost associated with the
projects concerned, and the constraints correspond to the societal preferences with
respect to different aspects of sustainability, which are usually represented in terms of
acceptance criteria. In the context of non-marginal decision analysis, sustainable
policy-making on the design and maintenance of civil infrastructure can be discussed
within a macroeconomic framework. Focusing on individual issues in marginal
decision analysis as well as non-marginal decision analysis, the present thesis proposes
methods useful for formulating and solving the constrained optimization problems, as well as a methodological approach for representing the structural performance of infrastructure, in terms of reliability, within the economic growth theoretical framework.
In the context of marginal decision analysis, the main constituents of the objective
function are: probability of failure; discount factors; cost terms such as initial cost,
maintenance cost, cost of failure and indirect cost beyond the direct cost associated
with structural failure. In Chapters 2, 4 and 5, these constituents are individually
addressed and investigated from a sustainability perspective. On the other hand, in
Chapter 3 a computational method is presented for formulating and solving the
constrained optimization problems integrating these constituents.
In Chapters 3, 4 and 5, it is implicitly assumed that the decision analyses are marginal. That is, these analyses are only valid if the consequences of the decisions can reasonably be assumed not to influence long-term economic growth. However, there have been cases where the life-cycle cost optimization concept, which is the typical concept for marginal decision analysis, appears to have been applied beyond its limitations^24. Therefore, in order to clarify its underlying assumptions and limitations, the derivation of the life-cycle cost optimization concept from a broader decision principle is presented in Section 7.2.
24 In reality, it is often very difficult to check the marginality. Hence, the marginality is often merely an assumption of the decision analysis. Even in such cases, it is important that the interpretation and application of the analysis results remain consistent with this assumption.
The methods proposed for marginal decision analysis are directly useful in present practical decision situations where due consideration of sustainability is required. On the other hand, the proposed approach for non-marginal decision analysis serves as a first step towards the further development of a general framework for sustainable policy-making on civil infrastructure.
However, these scientific achievements are limited. Regarding (1), the marginality of decisions introduced in the present thesis is difficult to assess in practice; strictly speaking, any engineering decision can affect economic growth and can therefore, by definition, be non-marginal. Marginal decision analysis should thus be considered an approximation. In practice, the concept of marginality should be used to check whether the assumption of marginality is reasonable in a given decision situation; only when the assumption is reasonable can a marginal decision analysis be performed. Otherwise, a non-marginal decision analysis should be undertaken.
Concerning (2), it is assumed that the objective function in the decision problems can be represented by, or otherwise converted to, monetary terms. However, it is not clear whether the objective function can be fully described in monetary terms. Even if it were possible, it is still not obvious how the values of different actions and consequences can be objectively quantified in monetary terms. The present thesis provides neither a justification for this assumption nor ways to quantify these values. Furthermore, the boundary conditions of the decision-making process are assumed to be given. However, since the choice of the boundary conditions affects the outcome of the optimization, these boundary conditions should be carefully assessed and chosen; the way in which this is done is addressed as a future task. Finally, in regard to (3), the proposed framework is still under development in the sense that the role of civil infrastructure in the economy is considered only in terms of productivity; civil infrastructure plays other important roles in society, such as providing amenity for leisure and safety measures to mitigate natural hazards. At the same time, the operation of civil infrastructure affects the quality of the environment. These aspects are not considered in the proposed framework, and their consideration is addressed as an additional future task.
8.3. Outlook
Concerning the acceptance criteria for human safety in the context of engineering project appraisal, a number of approaches have been proposed and utilized in practice. One common approach in practice is the use of the Farmer diagram, often called the F-N curve^25. In this approach, the F-N curve of a considered project is compared with a criterion F-N curve, which is usually provided by regulatory authorities; the considered project is acceptable if its F-N curve lies below the criterion F-N curve. However, several inconsistencies in the use of F-N curves for project appraisal concerning human safety have been pointed out. Among others, it is possible that a project associated with a higher expected number of fatalities due to possible accidents in a given time period is accepted, whereas another project associated with a lower expected number of fatalities is rejected. This is because the F-N curve-based project appraisal essentially concentrates on one extreme feature of the distribution of fatalities due to different possible accidents, disregarding the overall characteristics of this distribution, see Evans and Verlander (1997). In Evans and Verlander (1997), it is also shown that the F-N curve-based project appraisal fails to pass a logical test for a prescriptive criterion. Recently, a promising approach has been developed based on the life quality index (LQI) proposed by Nathwani et al. (1997). The LQI is a social indicator composed of the gross domestic product per capita, the life expectancy and the fraction of lifetime spent working for a living. In this approach, the LQI is considered to represent the indifference between an increase/decrease of life expectancy and a decrease/increase of consumption per capita. Thus, the willingness to pay for life-saving measures can be derived from this index, see e.g. Skjong and Ronold (1998) and Rackwitz (2003). Further development of this approach is necessary and is on-going, see e.g. Ditlevsen (2004), Kübler and Faber (2005), Pandey et al. (2006), and Ditlevsen and Friis-Hansen (2008).
25 An F-N curve represents, for different n, the mean absolute frequency F(n) of accidents within a reference time period that cause n or more fatalities in the considered project. Normally, the horizontal axis of the diagram corresponds to the number n of fatalities and the vertical axis to the mean absolute frequency F(n).
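As a minimal illustration of the F-N-based appraisal described above, the following sketch checks whether a project's F-N curve lies below a criterion curve of the commonly used form F_crit(n) = C·n^(-a). The project event frequencies and the criterion parameters are hypothetical values, not taken from any regulation.

# Sketch of an F-N-curve-based acceptance check. F(n) is the mean annual
# frequency of accidents with n or more fatalities (see footnote 25); the
# criterion curve C * n**(-a) and the project data below are hypothetical.

def fn_acceptable(fatalities, frequencies, C=1e-2, a=1.0):
    """Return True if F(n) <= C * n**(-a) for every n on the project's F-N curve."""
    pairs = sorted(zip(fatalities, frequencies))
    total = sum(freq for _, freq in pairs)
    running = 0.0
    fn_curve = []
    for n, freq in pairs:
        fn_curve.append((n, total - running))   # frequency of events with >= n fatalities
        running += freq
    return all(Fn <= C * n ** (-a) for n, Fn in fn_curve)

# Hypothetical project: annual frequencies of events by number of fatalities
fatalities = [1, 10, 100]
frequencies = [5e-3, 5e-4, 5e-5]
print("acceptable:", fn_acceptable(fatalities, frequencies))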
The acceptance criteria for environmental impacts, e.g. targets in pollution control, the use of non-renewable resources, and the recycling of partly-renewable materials, have been intensively discussed in environmental sciences and economics, see Perman et al. (2003) for an overview. Among others, what is relevant to the design and maintenance of civil infrastructure includes: the recycling of construction materials, e.g. cement, aggregates in concrete and steel; and the emissions of carbon dioxide in the construction and operation of the infrastructure. These should be addressed in the context of life-cycle optimization problems together with the life-cycle cost. Thereby, a controversial issue arises: how to identify a decision alternative among the set of Pareto-optimal solutions if the problem is formulated as a multi-objective optimization problem, or otherwise which attribute should be taken as the (scalar) objective function and which should be treated as constraints in a constrained optimization problem. This should be addressed as a challenging research task.
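One common way of handling the second option mentioned above is the so-called epsilon-constraint formulation, in which one attribute is minimized while the others are bounded by acceptance thresholds. The sketch below illustrates this for a life-cycle cost minimized under a carbon dioxide cap; the cost function, the emission function and the cap are hypothetical.

import numpy as np

# Sketch of an epsilon-constraint formulation of the multi-attribute
# life-cycle optimization discussed above: minimize the expected life-cycle
# cost over a design parameter while keeping CO2 emissions below a cap.
# The functional forms and the cap are hypothetical, for illustration only.

def life_cycle_cost(a):
    """Hypothetical expected life-cycle cost as a function of a design parameter a."""
    return 1.0 + 0.5 * a + 2.0 / a

def co2_emissions(a):
    """Hypothetical CO2 emissions, increasing with the amount of material used."""
    return 0.8 * a

co2_cap = 1.5                             # acceptance threshold (hypothetical)
a_grid = np.linspace(0.1, 5.0, 1000)      # candidate designs
feasible = a_grid[co2_emissions(a_grid) <= co2_cap]
a_opt = feasible[np.argmin(life_cycle_cost(feasible))]
print(f"optimal design a = {a_opt:.2f}, cost = {life_cycle_cost(a_opt):.2f}")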
Different choices of the values of discount rates often lead to different conclusions. One example can be seen in the debate between Nordhaus (2007) and Stern and Taylor (2007) on the necessity of urgent countermeasures against global warming. Nordhaus (2007) criticizes the choice of the values of the discount rates in the Stern Review (2006) (the discount rate for pure-time preference: ρ = 0.001/yr, the discount rate for consumption growth: δ = 0.013/yr, and the elasticity of the marginal utility of consumption: η = 1) by arguing that the resulting real return rate, r = ηδ + ρ = 0.014/yr, is far smaller than the real return rate observed in the capital market. In response to this criticism, Stern and Taylor (2007) justify their choice by claiming that 1) the discount rate for pure-time preference should be significantly smaller when the consequences of the decision affect both current and future generations^26, and 2) the capital market is imperfect in the sense that those who do not or cannot participate in the market (i.e. the young, the poor and future generations) have little or no influence on current market behavior. The underlying issue in this debate is the choice of the perspective to be followed in societal decision-making: normative or descriptive. The normative perspective seems reasonable for societal decision-making. However, with this approach it is difficult to directly obtain the value of the discount rate for pure-time preference without relying on statistics that may be affected by the capital market. Furthermore, the justification for assuming a positive discount rate for pure-time preference is a controversial issue, see e.g. Price (1993); the choice of the value of the discount rate can be subjective.
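The real return rate quoted in this debate follows from the Ramsey-type relation r = ρ + η δ. The sketch below reproduces the figure implied by the Stern Review parameters cited above and, for contrast, evaluates an alternative, more market-oriented parameter set; the alternative values are illustrative and are not those advocated by Nordhaus (2007).

# Ramsey-type relation between the real return rate r, the pure-time preference
# rate rho, the elasticity of marginal utility eta and the consumption growth
# rate delta: r = rho + eta * delta.

def real_return_rate(rho, eta, delta):
    return rho + eta * delta

# Stern Review (2006) parameters as quoted in the text above
print(real_return_rate(rho=0.001, eta=1.0, delta=0.013))   # 0.014 per year

# An alternative, market-oriented parameter set (illustrative values only)
print(real_return_rate(rho=0.015, eta=2.0, delta=0.020))   # 0.055 per year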
consistent with the state of the art of the philosophical discussions on discounting; continuous literature reviews and the dissemination of the review results to decision-makers are important.
At the same time, more sophisticated, realistic economic models need to be developed to fully capture the interaction between infrastructure capital and other capital, and the socio-economic roles of infrastructure. It is also necessary to extend the framework so as to take into account environmental aspects such as the exploitation, recycling and reuse of non-renewable resources and the protection of biodiversity. The goal in the development of a non-marginal decision framework is to obtain a framework that can identify sustainable policies on infrastructure, taking into account all relevant aspects of sustainability, including economic growth, the socio-economic role of infrastructure and environmental issues, jointly.
References
Arrow, K. J. (1962). The economic implications of learning by doing. Review of Economic
Studies, 29, 155-173.
ASCE7-98. (2000). Minimum design loads for buildings and other structures, Revision of
ANSI/ASCE.
ASCE. (2005). Report card for America's infrastructure.
Aschauer, D. A. (1989). Is Public Expenditure Productive? Journal of Monetary Economics,
23, 177-200.
Ayres, R. U., van den Bergh, J. C. J. M., and Gowdy, J. M. (1998). Viewpoint: Weak versus
strong sustainability. Tinbergen Institute Discussion papers.
Baker, J. W., Schubert, M., and Faber, M. H. (2007). On the assessment of robustness.
Structural Safety, In Press, Corrected Proof.
Bangso, O., Flores, M. J., and Jensen, F. V. (2003). Plug & Play OOBNs. Lecture Notes in
Artificial Intelligence, 3040, 457-467.
Bangso, O., and Olesen, K. G. (2003). Applying Object Oriented Bayesian Networks to Large
(Medical) Decision Support Systems. Proceedings of 8th Scandinavian Conference on
Artificial Intelligence, SCAI'03, Bergen, Norway.
Barro, R. J., and Sala-i-Martin, X. (2004). Economic Growth, Second Edition, The MIT Press, Cambridge, Massachusetts.
Bayer, S. (2003). Generation-adjusted discounting in long-term decision-making. International Journal of Sustainable Development, 6(1), 133-149.
Bayer, S., and Cansier, D. (1999). Intergenerational Discounting: A New Approach. Journal of
International Planning Literature, 14(3), 301-325.
Bayraktarli, Y., and Faber, M. H. (2007). Bayesian network approach for managing earthquake
risks. International Forum on Engineering Decision Making, IFED3, Shoal Bay, Australia.
Benjamin, J. R., and Cornell, C. A. (1970). Probability, Statistics and Decision for Civil
Engineers, McGraw-Hill, New York.
Bobbio, A., Ciancamerla, E., Franceschinis, G., Gaeta, R., Minichino, M., and Portinale, L.
(2003). Sequential application of heterogeneous models for the safety analysis of a control
system: a case study. Reliability Engineering & System Safety, 81(3), 269-280.
Bobbio, A., Portinale, L., Minichino, M., and Ciancamerla, E. (2001). Improving the analysis
of dependable systems by mapping fault trees into Bayesian networks. Reliability
Engineering and System Safety, 71(3), 249-260.
Brundtland, G. H. (1987). Our Common Future, Oxford University Press.
Bulow, J. I., and Summers, L. H. (1984). The Taxation of Risky Assets. The Journal of
Political Economy, 92(1), 20-39.
Canning, D. (1998). A Database of World Infrastructure Stocks, 1950-95. World Bank Policy
Research Working Paper, 1929, World Bank.
Canning, D. (1999). Infrastructure's contribution to aggregate output. World Bank Policy
Research Working Paper, World Bank.
Canning, D., and Bennathan, E. (2000). The Social Rate of Return on Infrastructure
Investments. World Bank Policy Research Working Paper, 2390, World Bank.
Cass, D. (1965). Optimum Growth in an Aggregative Model of Capital Accumulation. Review of Economic Studies, 32, 233-240.
Clark, J. S., and Gelfand, A. E. (2006). Hierarchical Modelling for the Environmental
Sciences, Oxford University Press, New York.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values, Springer-Verlag,
London.
Coles, S., Pericchi, L. R., and Sisson, S. (2003). A fully probabilistic approach to extreme
rainfall modeling. Journal of Hydrology, 273, 35-50.
Cornell, C. A. (1969). A Probability-Based Structural Code. ACI Journal, 974-985.
Dasgupta, P. S., and Heal, G. M. (1974). The optimal depletion of exhaustible resources.
Review of Economic Studies, 41, 3-28.
De Groot, M. H. (1970). Optimal Statistical Decisions, John Wiley & Sons, Inc.
Der Kiureghian, A., and Ditlevsen, O. (2007). Aleatory or epistemic? Does it matter? Special
workshop on risk acceptance and risk communication, Stanford University, California,
USA.
Der Kiureghian, A., and Ditlevsen, O. (2008). Aleatory or epistemic? Does it matter?
Structural Safety, available online.
Der Kiureghian, A., Haukaas, T., and Fujimura, K. (2006). Structural reliability software at the
University of California, Berkeley. Structural Safety, 28(1-2), 44-67.
Der Kiureghian, A., and Moghtaderi-Zadeh, M. (1982). An integrated approach to the
reliability of engineering systems. Nuclear Engineering and Design, 71(3), 349-354.
Der Kiureghian, A., and Song, J. (2008). Multi-scale reliability analysis and updating of
complex systems by use of linear programming. Reliability Engineering & System Safety,
93(2), 288-297.
Ditlevsen, O. (2004). Life quality index revisited. Structural Safety, 26(4), 443-451.
Ditlevsen, O., and Bjerager, P. (1986). Methods of structural systems reliability. Structural
Safety, 3(3-4), 195-229.
Ditlevsen, O., and Friis-Hansen, P. (2008). Cost and benefit including value of life, health and
environmental damage measured in time units. Structural Safety, In Press, Corrected Proof.
Ditlevsen, O., and Madsen, H. O. (2005). Structural Reliability Methods.
Duffy-Deno, K. T., and Eberts, R. W. (1991). Public infrastructure and regional economic
development: A simultaneous equations approach. Journal of Urban Economics, 30,
329-343.
Duggal, V. G., Saltzman, C., and Klein, L. R. (1999). Infrastructure and productivity: a
nonlinear approach. Journal of Econometrics, 92, 47-74.
DuraCrete. (2000). Statistical Quantification of the Variables in the Limit State Functions.
Final DuraCrete report, European Union.
Easterly, W., and Rebelo, S. (1993). Fiscal policy and economic growth. Journal of Monetary
Economics, 32(3), 417-458.
Engle, R. F., and Granger, C. W. J. (1987). Co-integration and error correction: Representation,
estimation, and testing. Econometrica, 55(2), 251-276.
EUROCONSTRUCT. (2007). European construction market trends to 2010, Country report.
Evans, A. W., and Verlander, N. Q. (1997). What Is Wrong with Criterion FN-Lines for
Judging the Tolerability of Risk? Risk Analysis, 17(2), 157-168.
Faber, M. H. (2003). Uncertainty Modeling and Probabilities in Engineering Decision
Analysis. Proceedings of the 22nd International Conference on Offshore Mechanics and
Arctic Engineering, OMAE2003, Cancun, Mexico.
Faber, M. H., Bayraktarli, Y., and Nishijima, K. (2007a). Recent Developments in the
Management of Risks Due to Large Scale Natural Hazards. XVI Congreso Nacional
Ingenieria Sismica, Ixtapa-Zihuatanejo, Mexico.
Faber, M. H., Engelund, S., Sørensen, J. D., and Bloch, A. (2000). Simplified and Generic Risk
Based Inspection Planning. Proceedings OMAE2000, 19th Conference on Offshore
Mechanics and Arctic Engineering, New Orleans, Louisiana, USA,
[OMAE2000/S&R6143].
Faber, M. H., Maes, M., Baker, J., Vrouwenvelder, T., and Takada, T. (2007b). Principles of
risk assessment of engineered systems. ICASP 10, Tokyo, Japan.
Faber, M. H., and Maes, M. A. (2003). Modeling of Risk Perception in Engineering Decision
Analysis. Proceedings of 11th IFIP WG7.5 Working Conference on Reliability and
Optimization of Structural Systems, 113-122.
Faber, M. H., and Maes, M. A. (2005). Epistemic Uncertainties in Decision Making.
Proceedings of the 24th International Conference on Offshore Mechanics and Arctic
Engineering, OMAE2005, Halkidiki, Greece.
Nishijima, K., Straub, D., and Faber, M. H. (2005). The Effect of Changing Decision Makers
on the Optimal Service Life Design of Concrete Structures. Proceedings of the 4th
International Workshop on Life-Cycle Cost Analysis and Design of Civil Infrastructures
Systems, Cocoa Beach, Florida, 325-333.
Nishijima, K., Straub, D., and Faber, M. H. (2007). Inter-generational distribution of the
life-cycle cost of an engineering facility. Journal of Reliability of Structures and Materials,
3(1), 33-46.
Nordhaus, W. (2007). Critical Assumptions in the Stern Review on Climate Change. Science,
317, 201-202.
O'Hagan, A., and Oakley, J. E. (2004). Probability is perfect, but we can't elicit it perfectly.
Reliability Engineering & System Safety, 85, 239-248.
Pandey, M. D., Nathwani, J. S., and Lind, N. C. (2006). The derivation and calibration of the
life-quality index (LQI) from economic principles. Structural Safety, 28(4), 341-360.
Pate-Cornell, M. E. (1996). Uncertainties in risk analysis: Six levels of treatment. Reliability
Engineering & System Safety, 54, 95-111.
Perman, R., Ma, Y., McGilvray, J., and Common, M. (2003). Natural Resource and
Environmental Economics (3rd edition), Pearson Education Limited, Harlow Essex UK.
Pezzey, J. C. V. (1992). Sustainability: an interdisciplinary guide. Environmental Values, 1,
321-362.
Pezzey, J. C. V. (1997). Sustainability constraints versus optimality versus intertemporal
concern, and axioms versus data. Land Economics, 73(4), 448-466.
Pezzey, J. C. V., and Withagen, C. A. (1998). The rise, fall and sustainability of
capital-resource economics. Scandinavian Journal of Economics, 100(2), 513-527.
Press, W. H., Flannery, B. P., Teukolsky, S. A., and Vetterling, W. T. (1988). Numerical Recipes
in C, Cambridge University Press.
Price, C. (1993). Time, Discounting & Values, Blackwell Publishers, Oxford, United Kingdom.
Rackwitz, R. (2000). Optimization - the basis of code-making and reliability verification.
Structural Safety, 22(1), 27-60.
Rackwitz, R. (2002). Optimization and Risk Acceptability Based on the Life Quality Index.
Structural Safety, 24(2-4), 297-332.
Rackwitz, R. (2003). Acceptable Risks and Affordable Risk Control for Technical Facilities
and Optimization. Reliability Engineering and Systems Safety.
Rackwitz, R., Lentz, A., and Faber, M. (2005). Socio-economically sustainable civil
engineering infrastructures by optimization. Structural Safety, 27(3), 187-229.
Raiffa, H., and Schlaifer, R. (1961). Applied Statistical Decision Theory, Cambridge
University Press, Cambridge.
Ramsey, F. (1928). A mathematical theory of saving. Economic Journal, 38, 543-559.
Raudenbush, S., and Bryk, A. S. (1986). A Hierarchical Model for Studying School Effects.
Sociology of Education, 59(1), 1-17.
Rioja, F. K. (2003). Filling potholes: macroeconomic effects of maintenance versus new
investments in public infrastructure. Journal of Public Economics, 87, 2281-2304.
Romer, P. M. (1986). Increasing returns and long-run growth. Journal of Political Economy,
94, 1002-1037.
Rosenblueth, E., and Mendoza, E. (1971). Reliability Optimization in Isostatic Structures.
Journal of the Engineering Mechanics Division, 97(6), 1625-1648.
Royset, J. O., Der Kiureghian, A., and Polak, E. (2003). Reliability-based optimal design: problem formulations, algorithms and application. Proceedings of the 11th IFIP WG7.5 Working Conference on Reliability and Optimization of Structural Systems, Banff, Canada, 1-12.
Salazar, D., Rocco, C. M., and Galvan, B. J. (2006). Optimization of constrained
multiple-objective reliability problems using evolutionary algorithms. Reliability
Engineering & System Safety, 91(9), 1057-1070.
Skjong, R., and Ronold, K. (1998). Societal Indicators and Risk Acceptance. 17th
International Conference on Offshore Mechanics and Arctic Engineering, Lisbon, Portugal.
Smyth, P. (1997). Belief networks, hidden Markov models, and Markov random fields: A
unifying view. Pattern Recognition Letters, 18(11-13), 1261-1268.
Solow, R. M. (1956). A contribution to the theory of economic growth. Quarterly Journal of
Economics, 70, 65-94.
Solow, R. M. (1974). Intergenerational equity and exhaustible resources. Review of Economic
Studies, 41(22-46).
Solow, R. M. (1986). On the intergenerational allocation of natural resources. Scandinavian
Journal of Economics, 88(1), 141-149.
Song, J., and Kang, W.-H. (2008). System reliability and sensitivity under statistical
dependence by matrix-based system reliability method. Structural Safety, In Press,
Corrected Proof.
Stern, N. (2006). Stern Review: The Economics of Climate Change. HM Treasury.
Stern, N., and Taylor, C. (2007). Climate Change: Risk, Ethics, and the Stern Review. Science,
317, 203-204.
Stiglitz, J. E. (1974). Growth with exhaustible natural resources: efficient and optimal growth
path. Review of Economic Studies, 41, 123-137.
Straub, D. (2004). Generic approaches to risk based inspection planning for steel structures,
PhD thesis, ETH Zurich, Zurich.
Straub, D., and Der Kiureghian, A. (2008). Improved seismic fragility modeling from
empirical data. Structural Safety, 30(4), 320-336.
Straub, D., and Faber, M. H. (2005). Risk based inspection planning for structural systems.
Structural Safety, 27(4), 335-355.
Straub, D., and Faber, M. H. (2006). Computational Aspects of Risk-Based Inspection
Planning. Computer-Aided Civil and Infrastructure Engineering, 21(3), 179-192.
Swan, T. W. (1956). Economic growth and capital accumulation. Economic Record, 32,
334-361.
Tang, W. H. (1973). Probabilistic Updating of Flaw Information. Journal of Testing and Evaluation, 1, 459-467.
Thoft-Christensen, P., and Sørensen, J. D. (1987). Optimal Strategy for Inspection and Repair
of Structural Systems. Civil Engineering Systems, 4, 94-100.
Turner, R. K. (1992). Speculations on Weak and Strong Sustainability. Centre for Social and
Economic Research on the Global Environment (CSERGE).
USNRC. (1975). Reactor Safety Study - An Assessment of Accident Risks in U.S. Commercial
Nuclear Power Plants, WASH-1400 (NUREG-75/014). U.S. Nuclear Regulatory
Commission.
USNRC. (1990). Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants
(NUREG-1150). U.S. Nuclear Regulatory Commission.
Valente, S. (2005). Sustainable development, renewable resources and technological progress.
Environmental and Resource Economics, 30(1), 1573-1502.
Vanmarcke, E. (1983). Random Fields: Analysis and Synthesis, MIT Press, Cambridge,
Massachusetts.
Vesely, W. E., Goldberg, F. F., Roberts, N. H., and Haasl, D. F. (1981). Fault Tree Handbook
(NUREG-0492). U.S. Nuclear Regulatory Commission.
Volovoi, V. (2004). Modeling of system reliability Petri nets with aging tokens. Reliability
Engineering & System Safety, 84(2), 149-161.
Wen, Y. K., Ellingwood, B. R., Veneziano, D., and Bracci, J. (2003). Uncertainty Modeling in
Earthquake Engineering. MAE Center Project FD-2 Report.
Wierzbicky, W. (1936). La sécurité des constructions comme un problème de probabilité.
Annales de l'academie des sciences techniques, 7, 63-74.
Withagen, C. A. A. M. (1996). Sustainability and investment rules. Economics Letters, 53, 1-6.
World Bank. (1994). World development report 1994: infrastructure for development. World
Bank.
Zeira, J. (1987). Risk and Capital Accumulation in a Small Open Economy. The Quarterly
Journal of Economics, 102(2), 265-280.
Curriculum vitae
PERSONAL DETAILS
E-mail nishijima@ibk.baug.ethz.ch
Citizenship Japan
EDUCATION
PROFILE
Awards
Scholarship
2003-2004 Research fellowship for young scientists of the Japan Society for
the Promotion of Science (JSPS), DC1
Language
Japanese Native
German Good