Structural Safety
journal homepage: www.elsevier.com/locate/strusafe
Article history: Received 12 March 2010; Accepted 25 January 2011; Available online 25 February 2011

Keywords: Reliability; Metamodel; Kriging; Active learning; Monte Carlo; Failure probability

Abstract

An important challenge in structural reliability is to keep to a minimum the number of calls to the numerical models. Engineering problems involve more and more complex computer codes and the evaluation of the probability of failure may require very time-consuming computations. Metamodels are used to reduce these computation times. To assess reliability, the most popular approach remains the numerous variants of response surfaces. Polynomial Chaos [1] and Support Vector Machine [2] are also possibilities and have gained consideration among researchers in the last decades. More recently, however, Kriging, which originated from geostatistics, has emerged in reliability analysis. Widespread in optimisation, Kriging has just started to appear in uncertainty propagation [3] and reliability [4,5] studies. It presents interesting characteristics such as exact interpolation and a local index of uncertainty on the prediction which can be used in active learning methods. The aim of this paper is to propose an iterative approach based on Monte Carlo Simulation and Kriging metamodel to assess the reliability of structures in a more efficient way. The method is called AK-MCS, for Active learning reliability method combining Kriging and Monte Carlo Simulation. It is shown to be very efficient, as the probability of failure obtained with AK-MCS is very accurate for only a small number of calls to the performance function. Several examples from the literature are performed to illustrate the methodology and to prove its efficiency, particularly for problems dealing with high non-linearity, non-differentiability, non-convex and non-connex domains of failure and high dimensionality.

© 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.strusafe.2011.01.002
146 B. Echard et al. / Structural Safety 33 (2011) 145–154
for very weak probabilities of failure, yet they cannot be applied to all problems (especially with highly non-linear or complex limit states in the standard Gaussian space). This paved the way for the use of metamodels. These models aim at approximating the performance function with a strategic design of experiments (DoE) in order to obtain, for a lower computational cost, a sufficiently accurate prediction of the performance function's sign. Response surfaces are the most popular methods. They are used for their speed but they are limited in global interpolation accuracy. To avoid this problem, Polynomial Chaos can be used [1,8]. It corresponds to a response surface in a particular basis (Hermite or other polynomials). However, the definitions of the design of experiments and of the polynomial's degrees are tricky [9]. Furthermore, the accuracy evaluation requires cross validation. More recently, Support Vector Machine methods have also gained consideration with their great effectiveness for reliability studies [2,10].

Aside from these metamodels, a stochastic approach has been intensively investigated: Kriging. Developed for geostatistics in the fifties and sixties by Krige and then by Matheron [11], this method gained consideration in the computer experiments field in the eighties. It presents several interesting differences with the other metamodels. First, Kriging is an exact interpolation method, i.e. the prediction at a point belonging to the design of experiments is the exact value of the performance function at this point. Furthermore, thanks to its stochastic property, Kriging provides not only predicted values at any point, but also estimations of the local variance of the predictions. This variance defines the local uncertainty on the prediction and is called the Kriging variance: the higher the variance, the less certain the prediction. Thanks to this variance, Kriging has been intensively used in optimisation problems in the nineties with active learning methods such as Efficient Global Optimisation (EGO) [12]. Active learning means that the Kriging model is updated by adding a new point to the design of experiments, this point being selected for its expected improvement on the Kriging model. In this domain, EGO was a major step forward.

The applications of Kriging to structural reliability problems are rather recent. The precursor work seems to have been proposed by Romero et al. [3], who compare Kriging with polynomial regression and finite-element interpolation on progressive lattice samplings with analytical functions. The results show that none of the methods is more efficient than the others. Following this, Kaymaz [4] proposes a method to perform structural reliability analysis and compares it to classic response surface methods. His method is based on the MATLAB toolbox DACE [13,14] and consists of finding the design point. His conclusions are that the Kriging metamodel does not greatly improve the reliability results compared to quadratic response surface methods' results, unless the Kriging parameters are well chosen. Following this, structural reliability problems are also investigated using Kriging in [15]. These methods are based on progressive designs of experiments but they cannot be defined as active learning methodologies. Indeed, the design of experiments is not improved by learning from all the data supplied by Kriging, i.e. Kriging prediction and variance. This was noticed by Bichon et al. [5], who propose a fully active learning method to perform reliability analysis. Inspired by EGO and the Kriging contour estimation method of Ranjan et al. [16], this method is called Efficient Global Reliability Analysis (EGRA). The active learning approach is defined thanks to a learning function called the Expected Feasibility Function (EFF), which provides an indication of how well the true value of the performance function at a point can be expected to satisfy the constraint G(x) = 0. EFF is expressed as a function of the Kriging data (local prediction and local variance). More information about it is given in Section 3.3. The method consists of a sequential algorithm starting with (n+1)(n+2)/2 samples (n being the number of random variables) in the initial design of experiments, defined by using Latin Hypercube sampling over the bounds ± five standard deviations. From this design, the Kriging model is constructed and the point in space which maximises the learning function EFF (learning criterion) is searched using the DIRECT global optimisation algorithm [17] (this algorithm is preferred to the BRANCH-AND-BOUND one [18] used in the Kriging contour estimation method [16], which the authors see as too expensive). This point is then evaluated on the performance function and added to the design of experiments. This is done until some stopping condition is satisfied. The surrogate model is then considered accurate enough to calculate the probability of failure using Importance Sampling (IS). The results show that the method is really efficient. However, it can be noticed that in high-dimensional problems, the size of the initial design of experiments is extremely large. Furthermore, the method approximates the limit state in the whole design space, and therefore even in regions where configurations show very weak densities of probability and have negligible effects on the probability of failure.

To avoid that, this paper proposes to develop an Active learning method which combines Kriging and Monte Carlo Simulation to separate the predicted negative and positive Ĝ values of a Monte Carlo population. The idea is to perform a Monte Carlo Simulation without evaluating the whole population on G. The sign at each point is obtained thanks to the predictions of a Kriging model based on a few evaluated points. The first stage of the method consists of generating a Monte Carlo population. An initial design of experiments of very small size is selected among this population and a learning function is computed for all the points. The best next point to include in the design of experiments is selected by this function. The probability of failure is estimated at each step thanks to the Kriging predictions of the Monte Carlo population. This method makes it possible to evaluate points near the limit state to improve the accuracy of the metamodel and, above all, to focus only on the points having sufficiently high densities of probability to have a significant impact on the probability of failure. The approximation of the limit state is then very accurate in the Monte Carlo population. This method gives at the same time a Kriging estimation of the probability of failure and of its coefficient of variation, without requiring the expensive evaluation of the whole Monte Carlo population. It is named AK-MCS as it is an Active learning method combining Kriging and Monte Carlo Simulation. It must be seen as a modification of the crude Monte Carlo Simulation.

This article is organised in four sections. Kriging theory is presented in Section 2 to show the real interest of using it with computer codes. Following this, the proposed method (AK-MCS) is detailed in Section 3 as a solution to the problems of the existing methodologies. Its concept is defined as different from previous works. Section 4 proposes several examples to validate AK-MCS. It is compared to Monte Carlo Simulation and to results from the literature. AK-MCS is shown to be very efficient, as the number of calls to the performance function is relatively small compared to the other approaches.

2. Kriging theory: a reminder

Kriging is based on the idea that the performance function G(x) is seen as the realisation of a stochastic field G(x) [11]. The first step of Kriging consists of defining this stochastic field with its parameters according to a design of experiments. Then, the Best Linear Unbiased Predictor (BLUP) is used to estimate the value at a given point. The model [19] for G(x) is given as:

G(x) = F(x; β) + z(x)   (3)
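For illustration purposes, the predictor built on Eq. (3) can be sketched as follows for the ordinary Kriging case (constant regression part F). The Gaussian correlation model, the fixed correlation parameter theta and all names are assumptions of this sketch, not the settings of the DACE toolbox mentioned above; the sketch only reproduces the two properties exploited by active learning methods: exact interpolation at the design of experiments and a non-zero Kriging variance elsewhere.

```python
import numpy as np

def ordinary_kriging(X, y, theta=10.0):
    """Fit a 1D ordinary Kriging model with Gaussian correlation
    R_ij = exp(-theta * (x_i - x_j)^2) and return a predictor giving
    both the BLUP mean and the Kriging variance."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(X)
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
    R += 1e-10 * np.eye(n)                      # small nugget for numerical stability
    Rinv = np.linalg.inv(R)
    ones = np.ones(n)
    beta = ones @ Rinv @ y / (ones @ Rinv @ ones)   # generalised least-squares mean
    d = y - beta * ones
    sigma2 = d @ Rinv @ d / n                        # process variance estimate

    def predict(x):
        r = np.exp(-theta * (x - X) ** 2)
        mu = beta + r @ Rinv @ d                     # BLUP mean
        u = 1.0 - ones @ Rinv @ r
        var = sigma2 * (1.0 - r @ Rinv @ r + u ** 2 / (ones @ Rinv @ ones))
        return mu, max(var, 0.0)                     # Kriging variance

    return predict
```

At a point of the design of experiments the returned mean equals the observed value and the variance is (numerically) zero; between points the variance is strictly positive, which is exactly the local index of uncertainty used by the learning functions of Section 3.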
EFF(x) = (Ĝ(x) − a) [2Φ((a − Ĝ(x))/σ_Ĝ(x)) − Φ((a−ε − Ĝ(x))/σ_Ĝ(x)) − Φ((a+ε − Ĝ(x))/σ_Ĝ(x))]
         − σ_Ĝ(x) [2φ((a − Ĝ(x))/σ_Ĝ(x)) − φ((a−ε − Ĝ(x))/σ_Ĝ(x)) − φ((a+ε − Ĝ(x))/σ_Ĝ(x))]
         + ε [Φ((a+ε − Ĝ(x))/σ_Ĝ(x)) − Φ((a−ε − Ĝ(x))/σ_Ĝ(x))]                               (14)

where Φ is the standard normal cumulative distribution function and φ the standard normal density function. In the case of reliability, the threshold a is 0. In EGRA, the expected feasibility function is built with ε = 2σ_Ĝ²(x). The same ε is selected for AK-MCS+EFF.

The learning function U is defined as:

U(x) = |Ĝ(x)| / σ_Ĝ(x)   (15)

It indicates the distance in Kriging standard deviations between the prediction and the estimated limit state. It represents a reliability index on the risk of making a mistake on the sign of G(x) when considering G(x) to have the same sign as Ĝ(x). This index U(x) can be related to the lower confidence bounding (lcb) function proposed by Cox and John [23] for optimisation. In fact, they define lcb as:

lcb(x) = Ĝ(x) − b σ_Ĝ(x)   (16)

First, they select b equal to 2 or 2.5. This choice depends on whether a local improvement (b = 2) or a more global one (b = 2.5) is desired. Then, lcb is computed and minimised. Here, it works the opposite way, as it is a reliability problem and not an optimisation one. lcb is not computed since only the sign of the prediction matters; it is therefore set to lcb = 0. Then, U(x), which is equivalent to b
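The two learning functions of this section can be sketched in a few lines. The closed form below follows Eq. (14) with a = 0 and ε = 2σ_Ĝ²(x), and U is written as the ratio |Ĝ(x)|/σ_Ĝ(x), i.e. the distance to the limit state in Kriging standard deviations described above; the function names are illustrative, not the authors' code.

```python
import math

def Phi(t):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):
    """Standard normal density function."""
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def learning_U(mu, sigma):
    """U(x): distance to the limit state in Kriging standard deviations."""
    return abs(mu) / sigma

def learning_EFF(mu, sigma, a=0.0):
    """Expected Feasibility Function of Eq. (14), with eps = 2*sigma**2."""
    eps = 2.0 * sigma ** 2
    t0 = (a - mu) / sigma
    tm = (a - eps - mu) / sigma
    tp = (a + eps - mu) / sigma
    return ((mu - a) * (2.0 * Phi(t0) - Phi(tm) - Phi(tp))
            - sigma * (2.0 * phi(t0) - phi(tm) - phi(tp))
            + eps * (Phi(tp) - Phi(tm)))
```

A candidate whose sign is uncertain has a small U and a large EFF: AK-MCS+U evaluates next the point of the Monte Carlo population minimising U, while AK-MCS+EFF evaluates the point maximising EFF.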
4. AK-MCS academic validation

AK-MCS efficiency is illustrated on several examples which cover a wide variety of limit states: first, examples of dimension 2 are tested to observe the method's behaviour. They were selected for their high non-linearity and rather complex limit state. Following this, a non-linear analytical function with a moderate dimension (<10) is tested. It corresponds to the dynamic response of a non-linear oscillator. Finally, the method is performed on a high-dimension (40, 100) analytical example. These examples are compared to crude Monte Carlo Simulation and to results from the literature.

4.1. Example 1: series system with four branches

The first example consists of a series system with four branches which has been proposed in [25–27]. The random variables xi are standard normal distributed random variables. The performance function reads:

G(x1, x2) = min{ 3 + 0.1(x1 − x2)² − (x1 + x2)/√2 ;
                 3 + 0.1(x1 − x2)² + (x1 + x2)/√2 ;
                 (x1 − x2) + k/√2 ;
                 (x2 − x1) + k/√2 }                               (17)

with k taking two values (6 then 7) to fit the literature examples. AK-MCS is compared with numerous metamodels proposed in [26,27]. The results are summarised in Tables 2 and 3. The information is given as the number of calls to the performance function Ncall, the number of points nMC in the Monte Carlo population S, the probability of failure, the corresponding reliability index and the error percentage compared to the reference reliability index. The proposed method, AK-MCS, is tested with the two different learning functions defined previously (U and EFF) on the same Monte Carlo population and initial design of experiments.

4.1.1. Reliability results

First, the results (Tables 2 and 3) show that AK-MCS is more efficient than the other metamodels found in the literature. Indeed, the number of calls to the performance function is lower than for most of the metamodels proposed in [26,27] and the prediction of the probability of failure is very accurate. For k = 6 (Table 2), only crude Directional Sampling and Importance Sampling + Neural Network (IS+NN) require fewer calls to the performance function. However, the nearly radial shape of the limit state makes this an adequate problem to solve with crude directional sampling [26]. Concerning IS+NN, it must be noted that the reliability index is not very accurate: an error of 3% is obtained in comparison to the reference reliability index [26]. The same remark can be made for the second application with k = 7 (Table 3), where Directional Sampling + Neural Network (DS+NN) needs fewer calls but remains less accurate (7% error on the reliability index). A method with Ordinary Kriging (OK) is also proposed in [27] for k = 7. This method is found to require more calls to the performance function than AK-MCS and its accuracy remains very poor too.

AK-MCS shows great effectiveness and it must be noticed that AK-MCS+U gives a prediction of the probability of failure similar to the one obtained by Monte Carlo Simulation on the true performance function for the same population, for both values of k. This can be linked to Fig. 2, which shows that an accurate separator is defined in the regions where Monte Carlo sampling points are situated. Locations with extremely weak densities of probability are badly approximated as they do not present any interest in the

Table 3
Example 1, k = 7, reliability results – comparison of AK-MCS (population size: nMC = 10⁶) with a Monte Carlo Simulation on the same population and with metamodels from [27]. Legend: Ncall, number of calls to the performance function; P̂f, the probability of failure; C.O.V._P̂f, its coefficient of variation for the Monte Carlo Simulation; β, the corresponding reliability index and εβ, its percentage error in comparison with the reference reliability index (the Monte Carlo Simulation result for AK-MCS and the reference value mentioned in [27] for the other metamodels). If the predicted P̂f and β are similar to the reference value, ⁎ is mentioned in the last column.

Method                        Ncall    P̂f (C.O.V._P̂f)        β      εβ (%)
Monte Carlo                   10⁶      2.233×10⁻³ (2.1%)      2.843  –
AK-MCS+U                      96       2.233×10⁻³             2.843  ⁎
AK-MCS+EFF                    101      2.232×10⁻³             2.843  <10⁻⁴
Directional Sampling (DS)     9192     2.6×10⁻³               2.79   2.11
DS + Response Surface         830      1.0×10⁻³               3.03   6.32
DS + Spline                   107      1.0×10⁻³               3.00   5.26
DS + Neural Network           67       1.0×10⁻³               3.05   7.01
DS + Ordinary Kriging         107      1.5×10⁻³               2.95   3.51
Importance Sampling (IS)      4750     2.2×10⁻³               2.84   0.35
IS + Response Surface         3877     2.0×10⁻³               2.84   0.35
IS + Spline                   724      2.0×10⁻³               2.81   1.40
IS + Neural Network           125      2.9×10⁻³               2.76   3.16
IS + Ordinary Kriging         146      2.0×10⁻³               2.88   1.05
failure is similar to the one estimated by Monte Carlo Simulation on the performance function. AK-MCS+EFF is found to give a slightly less accurate probability than AK-MCS+U for approximately the same number of calls to the performance function. To improve accuracy, the stopping condition has to be more restrictive, but this would require more evaluations of G. By plotting the evolution of the probability of failure in terms of the number of calls to the performance function for k = 7 (Fig. 3), it can be seen that AK-MCS+EFF converges earlier towards the right probability. AK-MCS+U is slower to converge but the fastest to satisfy its stopping condition. Four levels (one for each branch) can be observed in its evolution in Fig. 3. These levels come from the way the learning criterion works. Indeed, as the limit state has four different branches, the learning criterion of U focuses on one of them first. Once the metamodel is accurate enough in this region, the learning criterion goes to another branch and carries on.

The method is compared in Table 4 with Monte Carlo Simulation, Subset Simulation [29] performed with Phimeca-Soft [30] and a passive Kriging metamodel, that is, one based on a fixed Latin Hypercube design of experiments whose size is equal to the number of calls required by AK-MCS+U.

This example shows that AK-MCS can be used on highly non-linear limit states and on problems involving non-convex and non-connex domains of failure. Its performance is incomparably better than that of Subset Simulation or even of passive Kriging combined with Latin Hypercube sampling. Indeed, the probability of failure estimated by AK-MCS with only 400 calls is found to be the same as the one obtained by Monte Carlo Simulation for the same population (Table 4). Furthermore, in this case, AK-MCS+U and AK-MCS+EFF perform the same. However, it must be noticed in Fig. 6 that AK-MCS+EFF gives an earlier convergence than AK-MCS+U. Fig. 7 shows the design of experiments required by AK-MCS+U to satisfy the stopping condition and the similar probability of failure to the one estimated by a classic Monte Carlo Simulation on the same population. It is seen that the points are
Table 4
Example 2, reliability results – comparison of AK-MCS (population size: nMC = 6×10⁴) with a Monte Carlo Simulation on the same population, with passive Kriging and Latin Hypercube Sampling and with Subset Simulation using Phimeca-Soft [30]. Legend: Ncall, number of calls to the performance function (expensive); P̂f, the probability of failure; C.O.V._P̂f, its coefficient of variation for the Monte Carlo Simulation; β, the corresponding reliability index and εβ, its percentage error in comparison with the reference reliability index (Monte Carlo Simulation result). If the predicted P̂f and β are similar to the reference value, ⁎ is mentioned in the last column.

Method                        Ncall    P̂f (C.O.V._P̂f)        β      εβ (%)
Table 5
Example 3, random variables [26].
Table 6
Example 3, reliability results – comparison of AK-MCS (population size: nMC = 7×10⁴) with a Monte Carlo Simulation on the same population and with metamodels from [26]. Legend: Ncall, number of calls to the performance function; P̂f, the probability of failure; C.O.V._P̂f, its coefficient of variation for the Monte Carlo Simulation; β, the corresponding reliability index and εβ, its percentage error in comparison with the reference reliability index (the Monte Carlo Simulation result for AK-MCS and the reference value mentioned in [26] for the other metamodels). If the predicted P̂f and β are similar to the reference value, ⁎ is mentioned in the last column.

Method                        Ncall    P̂f (C.O.V._P̂f)        β      εβ (%)
Monte Carlo                   7×10⁴    2.834×10⁻² (2.2%)      1.906  –
AK-MCS+U                      58       2.834×10⁻²             1.906  ⁎
AK-MCS+EFF                    45       2.851×10⁻²             1.903  0.16
Directional Sampling (DS)     1281     3.5×10⁻²               1.81   5.24
DS + Response Surface         62       3.4×10⁻²               1.82   4.71
DS + Spline                   76       3.4×10⁻²               1.83   4.19
DS + Neural Network           86       2.8×10⁻²               1.91   ⁎
Importance Sampling (IS)      6144     2.7×10⁻²               1.93   1.04
IS + Response Surface         109      2.5×10⁻²               1.96   2.62
IS + Spline                   67       2.7×10⁻²               1.93   1.04
IS + Neural Network           68       3.1×10⁻²               1.87   2.01

Fig. 11. Example 4, evolution of P̂f normalised by Pf obtained by Monte Carlo Simulation in terms of Ncall for n = 40.

For 40 random variables, the probability of failure starts to converge at around 50 calls to the performance function. AK-MCS+EFF and AK-MCS+U show the same behaviour in this example and neither seems to give a faster convergence (Fig. 11). The initial design of experiments of small size is found to perform well. Indeed, all efforts are concentrated on adding interesting points rather than on supplying the metamodel with a large initial design of experiments to cover the space as in [5]. Furthermore, the use of a Monte Carlo population makes it possible to reduce the number of points to predict. In fact, to approximate the performance function sufficiently with
the other methods presented before, a large number of predictions is needed to cover the space. Here, once again, the use of a Monte Carlo population is very relevant as it focuses only on configurations with sufficiently high densities of probability.

Probabiliste pour la conception Robuste en Fatigue), which is gratefully acknowledged by the authors.

References