OPENQUAKE ENGINE
User Instruction Manual
Version 3.6.0

Hands-on instructions on the different types of calculations you can carry out with the OpenQuake Engine software.
globalquakemodel.org/openquake
Authors
Marco Pagani¹, Vitor Silva¹, Anirudh Rao¹, Michele Simionato¹, Robin Gee¹

¹ GEM Foundation, via Ferrata 1, 20133 Pavia, Italy
² EUCENTRE, via Ferrata 1, 20133 Pavia, Italy
³ GFZ, Helmholtzstraße 6/7, 14467 Potsdam, Germany
Citation
Please cite this document as:
GEM (2019). The OpenQuake-engine User Manual. Global Earthquake Model (GEM) OpenQuake Manual for Engine version 3.6.0.
doi: 10.13117/GEM.OPENQUAKE.MAN.ENGINE.3.6.0, 189 pages.
Disclaimer
The OpenQuake-engine User Manual is distributed in the hope that it will be useful, but
without any warranty: without even the implied warranty of merchantability or fitness
for a particular purpose. While every precaution has been taken in the preparation of this
document, in no event shall the authors of the Manual and the GEM Foundation be liable to
any party for direct, indirect, special, incidental, or consequential damages, including lost
profits, arising out of the use of information contained in this document or from the use of
programs and source code that may accompany it, even if the authors and GEM Foundation
have been advised of the possibility of such damage. The Manual provided hereunder is
on an “as is” basis, and the authors and GEM Foundation have no obligations to provide
maintenance, support, updates, enhancements, or modifications.
License
This Manual is distributed under the Creative Commons Attribution-NonCommercial-ShareAlike
4.0 International License (CC BY-NC-SA 4.0). You can download this Manual and share it
with others as long as you provide proper credit, but you cannot change it in any way or use
it commercially.
Contents

Preface
I Introduction
1 OpenQuake-engine Background
II Hazard
III Risk
Bibliography
Index
Glossary
Preface
The goal of this manual is to provide a comprehensive and transparent description of the
features of the OpenQuake Engine. This manual is designed to be readable by someone
with a basic understanding of Probabilistic Seismic Hazard and Risk Analysis, but no previous
knowledge of the OpenQuake-engine is assumed.
The OpenQuake-engine is an effort promoted and actively developed by the Global
Earthquake Model, a public-private partnership initiated by the Global Science Forum of the
Organisation for Economic Co-operation and Development (OECD).¹
The OpenQuake-engine is the result of an effort carried out jointly by the Information
Technology and Scientific teams working at the Global Earthquake Model (GEM) Secretariat.
It is freely distributed under an Affero GPL license
(http://www.gnu.org/licenses/agpl-3.0.html).
¹ A short description of the process promoted by OECD is available here:
http://www.oecd.org/science/sci-tech/theglobalearthquakemodelgem.htm
Part I
Introduction
1. OpenQuake-engine Background
OpenQuake-engine is the seismic hazard and risk calculation software developed by the
Global Earthquake Model. By following current standards in software development, such as
test-driven development and continuous integration, the OpenQuake-engine aims at becoming
an open, community-driven tool for seismic hazard and risk analysis.
The source code of the OpenQuake-engine is available on a public web-based repository
at the following address: http://github.com/gem/oq-engine.
The OpenQuake-engine is available for the Linux, macOS, and Windows platforms. It
can be installed in several different ways. The following page provides a handy guide for
users to choose the most appropriate installation method depending on their intended use
cases:
https://github.com/gem/oq-engine/blob/engine-3.6/doc/installing/overview.md.
An OpenQuake-engine (oq-engine) analysis is launched from the command line of a
terminal.
A schematic list of the options that can be used for the execution of the oq-engine can be
obtained with the following command:
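user@ubuntu:~$ oq engine --help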
[--run CONFIG_FILE]
[--list-hazard-calculations]
[--list-risk-calculations]
[--delete-calculation CALCULATION_ID]
[--delete-uncompleted-calculations]
[--hazard-calculation-id HAZARD_CALCULATION_ID]
[--list-outputs CALCULATION_ID]
[--show-log CALCULATION_ID]
[--export-output OUTPUT_ID TARGET_DIR]
[--export-outputs CALCULATION_ID TARGET_DIR]
[-e]
[-l {debug, info, warn, error, critical}]
optional arguments:
-h, --help show this help message and exit
--log-file LOG_FILE, -L LOG_FILE
Location where to store log messages; if not
specified, log messages will be printed to the console
(to stderr)
--no-distribute, --nd
Disable calculation task distribution and run the
computation in a single process. This is intended for
use in debugging and profiling.
-y, --yes Automatically answer "yes" when asked to confirm an
action
-c CONFIG_FILE, --config-file CONFIG_FILE
Custom openquake.cfg file, to override default
configurations
--make-html-report YYYY-MM-DD|today, -r YYYY-MM-DD|today
Build an HTML report of the computation at the given
date
-u, --upgrade-db Upgrade the openquake database
-d, --db-version Show the current version of the openquake database
-w, --what-if-I-upgrade
Show what will happen to the openquake database if you
upgrade
--run CONFIG_FILE Run a job with the specified config file
--list-hazard-calculations, --lhc
List hazard calculation information
--list-risk-calculations, --lrc
List risk calculation information
--delete-calculation CALCULATION_ID, --dc CALCULATION_ID
Delete a calculation and all associated outputs
--delete-uncompleted-calculations, --duc
Delete all the uncompleted calculations
--hazard-calculation-id HAZARD_CALCULATION_ID, --hc HAZARD_CALCULATION_ID
Use the given job as input for the next job
--list-outputs CALCULATION_ID, --lo CALCULATION_ID
List outputs for the specified calculation
--show-log CALCULATION_ID, --sl CALCULATION_ID
Show the log of the specified calculation
--export-output OUTPUT_ID TARGET_DIR, --eo OUTPUT_ID TARGET_DIR
Export the desired output to the specified directory
--export-outputs CALCULATION_ID TARGET_DIR, --eos CALCULATION_ID TARGET_DIR
Export all of the calculation outputs to the specified
directory
-e , --exports        Comma-separated string specifying the export formats,
                      in order of priority
-l {debug, info, warn, error, critical}, --log-level {debug, info, warn, error, critical}
                      Defaults to "info"
Part II
Hazard
Source typologies
Source typologies for modelling distributed seismicity
Fault sources with floating ruptures
Fault sources without floating ruptures
Non-Parametric Sources
Magnitude-frequency distributions
Magnitude-scaling relationships
Relationships for shallow earthquakes in active tectonic regions
Magnitude-scaling relationships for subduction earthquakes
Magnitude-scaling relationships for stable continental regions
Miscellaneous Magnitude-Scaling Relationships
Calculation workflows
Classical Probabilistic Seismic Hazard Analysis
Event-Based Probabilistic Seismic Hazard Analysis
Scenario based Seismic Hazard Analysis
The hazard component of the OpenQuake-engine builds on top of the OpenQuake hazard
library (oq-hazardlib), a Python-based library containing tools for PSHA calculations.
The web repository of this library is available at the following address:
https://github.com/gem/oq-engine/tree/master/openquake/hazardlib.
In this section we briefly illustrate the main properties of the hazard component of the
OpenQuake-engine. In particular, we will describe the main typologies of sources supported
and the main calculation workflows available.
– Complex fault source - Often used to model subduction interface sources with a
complex geometry.
• Fault sources with ruptures always covering the entire fault surface:
– Characteristic fault source - A typology of source where ruptures always fill the
entire fault surface.
– Non-parametric source - A typology of source representing a collection of rup-
tures, each with its associated probabilities of 0, 1, 2, ... occurrences in the
investigation time.
• Sources for representing individual earthquake ruptures
– Planar fault rupture - an individual fault rupture represented as a single rectan-
gular plane
– Multi-planar fault rupture - an individual rupture represented as a collection of
rectangular planes
– Simple fault rupture - an individual fault rupture represented as a simple fault
surface
– Complex fault rupture - an individual fault rupture represented as a complex
fault surface
The OpenQuake-engine contains some basic assumptions for the definition of these
source typologies:
• In the case of area and fault sources, the seismicity is homogeneously distributed over
the source;
• Seismicity temporal occurrence follows a Poissonian model.
The above sets of sources may be referred to as “parametric” sources, that is to say that
the generation of the earthquake rupture forecast is done by the OpenQuake engine based
on the parameters of the sources set by the user. In some cases, particularly if the user wishes
the temporal occurrence model to be non-Poissonian (such as the lognormal or Brownian
Passage Time models), a different type of behaviour is needed. For this, the OpenQuake-engine
supports a non-parametric source, in which the earthquake rupture forecast is provided
explicitly by the user as a set of ruptures and their corresponding probabilities of occurrence.
Source data
For the definition of a point source the following parameters are required (Figure 2.1 shows
some of the parameters described below, together with an example of the surface of a
generated rupture):
• The coordinates of the point (i.e. longitude and latitude) [decimal degrees]
• The upper and lower seismogenic depths [km]
• One magnitude-frequency distribution
• One magnitude-scaling relationship
• The rupture aspect ratio
• A distribution of nodal planes, i.e. one (or several) instances of the following set of
parameters:
– strike [degrees]
– dip [degrees]
– rake [degrees]
• A magnitude independent depth distribution of hypocenters [km].
Figure 2.2 shows ruptures generated by a point source for a range of magnitudes. Each
rupture is centered on the single hypocentral position admitted by this point source. Ruptures
are created by conserving the area computed using the specified magnitude-area scaling
relationship and the corresponding value of magnitude.
Below we provide the excerpt of an .xml file used to describe the properties of a point
source. Note that in this example, ruptures occur on two possible nodal planes and two
hypocentral depths. Figure 2.3 shows the ruptures generated by the point source.
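A minimal sketch of such a point source in NRML, with two nodal planes and two hypocentral depths (all parameter values are purely illustrative), is:

<pointSource id="1" name="point" tectonicRegion="Active Shallow Crust">
    <pointGeometry>
        <gml:Point>
            <gml:pos>-122.0 38.0</gml:pos>
        </gml:Point>
        <upperSeismoDepth>0.0</upperSeismoDepth>
        <lowerSeismoDepth>10.0</lowerSeismoDepth>
    </pointGeometry>
    <magScaleRel>WC1994</magScaleRel>
    <ruptAspectRatio>0.5</ruptAspectRatio>
    <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="5.0" maxMag="6.5"/>
    <nodalPlaneDist>
        <nodalPlane probability="0.3" strike="0.0" dip="90.0" rake="0.0"/>
        <nodalPlane probability="0.7" strike="90.0" dip="45.0" rake="90.0"/>
    </nodalPlaneDist>
    <hypoDepthDist>
        <hypoDepth probability="0.5" depth="4.0"/>
        <hypoDepth probability="0.5" depth="8.0"/>
    </hypoDepthDist>
</pointSource>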
Figure 2.2 – Point source with multiple ruptures. Note the change in the aspect ratio once the
rupture width fills the entire seismogenic layer.

Figure 2.3 – Ruptures produced by the source created using the information in the example .xml
file described above.

A grid source is simply a collection of point sources distributed over a regular grid (usually
equally spaced in longitude and latitude). In probabilistic seismic hazard analysis a grid
source can be considered a model alternative to area sources, since they both model
distributed seismicity. Grid sources are generally used to reproduce more faithfully the spatial
pattern of seismicity shown by past earthquakes; in some models (e.g. Petersen et al., 2008)
only events of low and intermediate magnitudes are considered.
They are frequently, though not always, computed using seismicity smoothing algorithms
(Frankel, 1995; Woo, 1996, amongst many others).
The use of smoothing algorithms to produce grid sources brings some advantages compared
to area sources, since (1) it removes most of the unavoidable subjectivity involved in
defining the geometries of area sources and (2) it produces a spatial pattern of seismicity
that is usually closer to what is observed in reality. Nevertheless, in many cases smoothing
algorithms require an a-priori definition of some setup parameters that introduce a certain
degree of subjectivity into the calculation.
Grid sources are modeled in oq-engine simply as a set of point sources; in other words, a
grid source is just a long list of point sources specified as described in the previous section
(see page 18).
Area sources are usually adopted to describe the seismicity occurring over wide areas where
the identification and characterization - i.e. the unambiguous definition of position, geometry
and seismicity occurrence parameters - of single fault structures is difficult.
From a computation standpoint, area sources are comparable to grid sources since they
are both represented in the engine by a list of point sources.
Using the source data parameters (see below), the oq-engine creates a grid of equally spaced
point sources, each with the same seismicity occurrence properties (i.e. rate of events
generated).
Below we provide a brief description of the parameters necessary to completely describe
an area source.
Source data
• A polygon defining the external border of the area (i.e. a list of longitude-latitude
[degrees] tuples). The current version of the OQ-engine doesn’t support the definition
of internal borders.
• The upper and lower seismogenic depths [km]
• One magnitude-frequency distribution
• One magnitude-scaling relationship
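A minimal NRML sketch of an area source follows (values are illustrative; note that, as for point sources, a nodal plane distribution and a hypocentral depth distribution are also part of the definition):

<areaSource id="1" name="area" tectonicRegion="Active Shallow Crust">
    <areaGeometry>
        <gml:Polygon>
            <gml:exterior>
                <gml:LinearRing>
                    <gml:posList>
                        -122.5 37.5 -121.5 37.5 -121.5 38.5 -122.5 38.5
                    </gml:posList>
                </gml:LinearRing>
            </gml:exterior>
        </gml:Polygon>
        <upperSeismoDepth>0.0</upperSeismoDepth>
        <lowerSeismoDepth>10.0</lowerSeismoDepth>
    </areaGeometry>
    <magScaleRel>WC1994</magScaleRel>
    <ruptAspectRatio>1.0</ruptAspectRatio>
    <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="5.0" maxMag="6.5"/>
    <nodalPlaneDist>
        <nodalPlane probability="1.0" strike="0.0" dip="90.0" rake="0.0"/>
    </nodalPlaneDist>
    <hypoDepthDist>
        <hypoDepth probability="1.0" depth="5.0"/>
    </hypoDepthDist>
</areaSource>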
Simple Faults are the most common source type used to model shallow faults; the “simple”
adjective relates to the geometry description of the source which is obtained by projecting
the fault trace (i.e. a polyline) along a characteristic dip direction.
The parameters used to create an instance of this source type are described in the
following paragraph.
Source data
• A horizontal fault trace (usually a polyline). It is a list of longitude-latitude tuples
[degrees].
• A frequency-magnitude distribution
• A magnitude-scaling relationship
• A representative value of the dip angle (specified following the Aki-Richards convention;
see Aki and Richards, (2002)) [degrees]
• Rake angle (specified following the Aki-Richards convention; see Aki and Richards,
(2002)) [degrees]
• Upper and lower depth values limiting the seismogenic interval [km]
For near-fault probabilistic seismic hazard analysis, two additional parameters are needed
for characterising seismic sources:
• A hypocentre list. It is a list of the possible hypocentral positions, and the corresponding
weights, e.g., alongStrike="0.25" downDip="0.25" weight="0.25". Each hypocentral
position is defined in relative terms using as a reference the upper left corner of the
rupture and by specifying the fraction of rupture length and rupture width.
• A slip list. It is a list of the possible rupture slip directions [degrees], and their
corresponding weights. The angle describing each slip direction is measured counter-
clockwise using the fault strike direction as reference.
In near-fault PSHA calculations, the hypocentre list and the slip list are mandatory. The
weights in each list must always sum to one. The available GMPE that currently supports
near-fault directivity PSHA calculations in the OQ-engine is the ChiouYoungs2014NearFaultEffect
GMPE developed by Chiou and Youngs (2014) (associated with an Active Shallow
Crust tectonic region type).
We provide two examples of simple fault source files: the first describes a plain simple
fault source, while the second describes a simple fault source that can be used to perform
a PSHA calculation taking directivity effects into account.
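A minimal NRML sketch of the first case (a plain simple fault source, with illustrative values) is given below; the directivity-enabled variant would additionally carry the hypoList and slipList elements introduced above:

<simpleFaultSource id="1" name="simple fault" tectonicRegion="Active Shallow Crust">
    <simpleFaultGeometry>
        <gml:LineString>
            <gml:posList>
                -121.8229 37.7301
                -122.0388 37.8771
            </gml:posList>
        </gml:LineString>
        <dip>45.0</dip>
        <upperSeismoDepth>0.0</upperSeismoDepth>
        <lowerSeismoDepth>15.0</lowerSeismoDepth>
    </simpleFaultGeometry>
    <magScaleRel>WC1994</magScaleRel>
    <ruptAspectRatio>1.5</ruptAspectRatio>
    <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="5.0" maxMag="7.0"/>
    <rake>90.0</rake>
</simpleFaultSource>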
Usually, we use complex faults to model interplate megathrust faults such as the large
subduction structures active in the Pacific (Sumatra, South America, Japan), but this source
typology can also be used to create - for example - listric fault sources with a realistic geometry.
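A minimal NRML sketch of a complex fault source, in which the surface is defined by a top and a bottom edge (each a line of longitude, latitude, depth triples; values are illustrative), is:

<complexFaultSource id="1" name="complex fault" tectonicRegion="Subduction Interface">
    <complexFaultGeometry>
        <faultTopEdge>
            <gml:LineString>
                <gml:posList>
                    -124.7 41.0 2.0
                    -124.6 41.5 3.0
                </gml:posList>
            </gml:LineString>
        </faultTopEdge>
        <faultBottomEdge>
            <gml:LineString>
                <gml:posList>
                    -124.0 41.0 30.0
                    -123.9 41.5 32.0
                </gml:posList>
            </gml:LineString>
        </faultBottomEdge>
    </complexFaultGeometry>
    <magScaleRel>StrasserInterface</magScaleRel>
    <ruptAspectRatio>1.5</ruptAspectRatio>
    <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="6.5" maxMag="9.0"/>
    <rake>90.0</rake>
</complexFaultSource>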
As with the previous examples, the source definition combines parameters specifying the
source geometry with parameters describing the rupture mechanism, the magnitude-frequency
distribution and the remaining rupture properties.
Source data
• The characteristic rupture surface, defined through one of the following options: a
simple fault geometry, a complex fault geometry, or one or more planar surfaces.
• A frequency-magnitude distribution.
• Rake angle (specified following the Aki-Richards convention; see Aki and Richards,
(2002)).
• Upper and lower depth values limiting the seismogenic interval.
The non-parametric fault typology requires the user to indicate the rupture properties
(rupture surface, magnitude, rake and hypocentre) and the corresponding probabilities of the
rupture. The probabilities are given as a list of floating point values that correspond to the
probabilities of 0, 1, 2, ..., N occurrences of the rupture within the specified investigation
time. Note that there is not, at present, any internal check to ensure that the investigation
time to which the probabilities refer corresponds to that specified in the configuration file.
As the surface of the rupture is set explicitly, no rupture floating occurs, and, as in the case
of the characteristic fault source, the rupture surface can be defined as either a single planar
rupture, a list of planar ruptures, a simple fault source geometry, a complex fault source
geometry, or a combination of different geometries.
Comprehensive examples enumerating the possible configurations are shown below:
Listing 9 – Example non-parametric fault with planar and multi-planar fault geometry
Figure 2.6 shows the magnitude-frequency distribution obtained using the parameters
of the considered example.
Hybrid Characteristic earthquake model (à la Youngs and Coppersmith, 1985) - The
hybrid characteristic earthquake model, presented by Youngs and Coppersmith, 1985,
distributes seismic moment proportionally between a characteristic model (for larger
magnitudes) and an exponential model. The rate of events is dependent on the
magnitude of the characteristic earthquake, the b-value and the total moment rate of
the system (Figure 2.7). However, the total moment rate may be defined in one of
two ways. If the total moment rate of the source is known, as may be the case for a
fault surface with known area and slip rate, then the distribution can be defined from
the total moment rate (in N-m) of the source directly. Alternatively, the distribution
can be defined from the rate of earthquakes in the characteristic bin, which may be
preferable if the distribution is determined from observed seismicity behaviour. The
option to define the distribution according to the total moment rate, and the option to
define it from the rate of the characteristic events, are input as sketched below.
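The element and attribute names below follow the NRML schema; the numerical values are illustrative. The total-moment-rate form is:

<YoungsCoppersmithMFD minMag="5.0" bValue="1.0" characteristicMag="7.0"
                      totalMomentRate="1.05E19" binWidth="0.1"/>

whereas the characteristic-rate form is:

<YoungsCoppersmithMFD minMag="5.0" bValue="1.0" characteristicMag="7.0"
                      characteristicRate="0.005" binWidth="0.1"/>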
Note that in this distribution the width of the magnitude bin must be defined explicitly
in the model.
“Arbitrary” Magnitude Frequency Distribution - The arbitrary magnitude frequency distri-
bution is another non-parametric form of MFD, in which the rates are defined explicitly.
Here, the magnitude frequency distribution is defined by a list of magnitudes and
their corresponding rates of occurrence. There is no bin-width as the rates correspond
exactly to the specific magnitude. Unlike the evenly discretised MFD, there is no
requirement that the magnitudes be equally spaced. This distribution is input as:
<arbitraryMFD>
    <occurRates>0.12 0.036 0.067 0.2</occurRates>
    <magnitudes>8.1 8.47 8.68 9.02</magnitudes>
</arbitraryMFD>
• Strasser et al., 2010 - Define magnitude scaling relationships for subduction interface
and intraslab events. Only the magnitude to rupture-area scaling relationships are
implemented here, and they are called with the keywords StrasserInterface and
StrasserIntraslab respectively.
• Thingbaijam and Mai, 2017 - Define magnitude scaling relationships for subduction
interface events. Only the magnitude to rupture-area scaling relationship is implemented
here, and it is called with the keyword ThingbaijamInterface.
• EPRI, 2011 - Defines a single magnitude to rupture-area scaling relationship for use
in the central and eastern United States: Area = 10.0^(Mw - 4.336). It is called with the
keyword CEUS2011.
• PeerMSR defines a simple magnitude scaling relation used as part of the Pacific Earth-
quake Engineering Research Center verification of probabilistic seismic hazard analysis
programs: Area = 10.0^(Mw - 4.0).
• PointMSR approximates a ‘point’ source by returning an infinitesimally small area for
all magnitudes. It should only be used for distributed seismicity sources and not for
fault sources.
The hazard component of the OpenQuake-engine can compute seismic hazard using various
approaches. Three types of analysis are currently supported:
• Classical Probabilistic Seismic Hazard Analysis (PSHA), allowing calculation of hazard
curves and hazard maps following the classical integration procedure (Cornell, 1968;
McGuire, 1976) as formulated by Field et al., 2003.
• Event-Based Probabilistic Seismic Hazard Analysis, allowing calculation of ground-
motion fields from stochastic event sets. Traditional results - such as hazard curves -
can be obtained by post-processing the set of computed ground-motion fields.
• Scenario Based Seismic Hazard Analysis (SSHA), allowing the calculation of ground
motion fields from a single earthquake rupture scenario taking into account ground-
motion aleatory variability.
Each workflow has a modular structure, so that intermediate results can be exported and
analyzed. Each calculator can be extended independently of the others so that additional
calculation options and methodologies can be easily introduced, without affecting the overall
calculation workflow.
Input data for the classical Probabilistic Seismic Hazard Analysis (PSHA) consist of a PSHA
input model provided together with calculation settings.
The main calculators used to perform this analysis are the following:
1. Logic Tree Processor
The Logic Tree Processor (LTP) takes as an input the PSHA Input Model and creates
a Seismic Source Model. The LTP uses the information in the Initial Seismic Source
Models and the Seismic Source Logic Tree to create a Seismic Source Input Model (i.e.
a model describing geometry and activity rates of each source without any epistemic
uncertainty).
Following a procedure similar to the one just described the Logic Tree Processor creates
a Ground Motion model (i.e. a data structure that associates to each tectonic region
considered in the calculation a Ground Motion Prediction Equation (GMPE)).
2. Earthquake Rupture Forecast Calculator
The produced Seismic Source Input Model becomes the input for the Earthquake
Rupture Forecast (ERF) calculator, which creates a list of the earthquake ruptures
admitted by the source model, each one characterized by a probability of occurrence
over a specified time span.
3. Classical PSHA Calculator
The classical PSHA calculator uses the ERF and the Ground Motion model to compute
hazard curves on each site specified in the calculation settings.
Input data for the Event-Based PSHA - as in the case of the Classical PSHA calculator - consist
of a PSHA Input Model and a set of calculation settings.
The main calculators used to perform this analysis are:
1. Logic Tree Processor
The Logic Tree Processor works in the same way described in the description of the
Classical PSHA workflow (see Section 2.4.1 at page 39).
2. Earthquake Rupture Forecast Calculator
The Earthquake Rupture Forecast Calculator was already introduced in the description
of the PSHA workflow (see Section 2.4.1 at page 39).
3. Stochastic Event Set Calculator
The Stochastic Event Set Calculator generates a collection of stochastic event sets by
sampling the ruptures contained in the ERF according to their probability of occurrence.
A Stochastic Event Set (SES) thus represents a potential realisation of the seismicity
(i.e. a list of ruptures) produced by the set of seismic sources considered in the analysis
over the time span fixed for the calculation of hazard.
4. Ground Motion Field Calculator
The Ground Motion Field Calculator computes, for each event contained in a Stochastic
Event Set, a realization of the geographic distribution of the shaking by taking into
account the aleatory uncertainties in the ground-motion model. Optionally, the Ground
Motion Field calculator can consider the spatial correlation of the ground motion during
the generation of the Ground Motion Field (GMF).
5. Event-based PSHA Calculator
The event-based PSHA calculator takes a (large) set of ground-motion fields represen-
tative of the possible shaking scenarios that the investigated area can experience over
a (long) time span and for each site computes the corresponding hazard curve.
This procedure is computationally intensive and is not recommended for investigating
the hazard over large areas.
In the case of SSHA, the input data consist of a single earthquake rupture model and one or more
ground-motion models. Using the Ground Motion Field Calculator, multiple realizations of
ground shaking can be computed, each realization sampling the aleatory uncertainties in
the ground-motion model. The main calculator used to perform this analysis is the Ground
Motion Field Calculator, which was already introduced during the description of the event
based PSHA workflow (see Section 2.4.2 at page 39).
As the scenario calculator does not need to determine the probability of occurrence of the
specific rupture, but only sufficient information to parameterise the location (as a three-
dimensional surface), the magnitude and the style-of-faulting of the rupture, a simpler
NRML structure is sufficient compared to the source model structures described previously
in Section 2.1. A rupture model XML can be defined in the following formats:
1. Simple Fault Rupture - in which the geometry is defined by the trace of the fault rupture,
the dip and the upper and lower seismogenic depths. An example is shown below in
Listing 12.
<simpleFaultRupture>
    <magnitude>6.7</magnitude>
    <rake>180.0</rake>
    <hypocenter lon="-122.02750" lat="37.61744" depth="6.7"/>
    <simpleFaultGeometry>
        <gml:LineString>
            <gml:posList>
                -121.80236 37.39713
                -121.91453 37.48312
                -122.00413 37.59493
                -122.05088 37.63995
                -122.09226 37.68095
                -122.17796 37.78233
            </gml:posList>
        </gml:LineString>
        <dip>76.0</dip>
        <upperSeismoDepth>0.0</upperSeismoDepth>
        <lowerSeismoDepth>13.4</lowerSeismoDepth>
    </simpleFaultGeometry>
</simpleFaultRupture>

</nrml>
2. Planar & Multi-Planar Rupture - in which the geometry is defined as a collection of one
or more rectangular planes, each defined by four corners. An example of a multi-planar
rupture is shown below in Listing 13.
3. Complex Fault Rupture - in which the geometry is defined by the upper, lower and (if
applicable) intermediate edges of the fault rupture. An example of a complex fault
rupture is shown below in Listing 14.
<multiPlanesRupture>
    <magnitude>8.0</magnitude>
    <rake>90.0</rake>
    <hypocenter lat="-1.4" lon="1.1" depth="10.0"/>
    <planarSurface strike="90.0" dip="45.0">
        <topLeft lon="-0.8" lat="-2.3" depth="0.0" />
        <topRight lon="-0.4" lat="-2.3" depth="0.0" />
        <bottomLeft lon="-0.8" lat="-2.3890" depth="10.0" />
        <bottomRight lon="-0.4" lat="-2.3890" depth="10.0" />
    </planarSurface>
    <planarSurface strike="30.94744" dip="30.0">
        <topLeft lon="-0.42" lat="-2.3" depth="0.0" />
        <topRight lon="-0.29967" lat="-2.09945" depth="0.0" />
        <bottomLeft lon="-0.28629" lat="-2.38009" depth="10.0" />
        <bottomRight lon="-0.16598" lat="-2.17955" depth="10.0" />
    </planarSurface>
</multiPlanesRupture>

</nrml>
<complexFaultRupture>
    <magnitude>8.0</magnitude>
    <rake>90.0</rake>
    <hypocenter lat="-1.4" lon="1.1" depth="10.0"/>
    <complexFaultGeometry>
        <faultTopEdge>
            <gml:LineString>
                <gml:posList>
                    0.6 -1.5 2.0
                    1.0 -1.3 5.0
                    1.5 -1.0 8.0
                </gml:posList>
            </gml:LineString>
        </faultTopEdge>
        <intermediateEdge>
            <gml:LineString>
                <gml:posList>
                    0.65 -1.55 4.0
                    1.1 -1.4 10.0
                    1.5 -1.2 20.0
                </gml:posList>
            </gml:LineString>
        </intermediateEdge>
        <faultBottomEdge>
            <gml:LineString>
                <gml:posList>
                    0.65 -1.7 8.0
                    1.1 -1.6 15.0
                    1.5 -1.7 35.0
                </gml:posList>
            </gml:LineString>
        </faultBottomEdge>
    </complexFaultGeometry>
</complexFaultRupture>

</nrml>
This Chapter summarises the structure of the information necessary to define a PSHA input
model to be used with the OpenQuake-engine.
Input data for probabilistic seismic hazard analysis (Classical, Event based, Disaggre-
gation, and Uniform Hazard Spectra) are organised into:
• A general configuration file.
• A file describing the Seismic Source System, that is the set of initial source models and
associated epistemic uncertainties needed to model the seismic activity in the region
of interest.
• A file describing the Ground Motion System, that is the set of ground motion prediction
equations, per tectonic region type, needed to model the ground motion shaking in
the region of interest.
Figure 3.1 summarises the structure of a PSHA input model for the OpenQuake-engine and
the relationships between the different files.
...
<logicTreeBranchingLevel branchingLevelID=ID>
<logicTreeBranchSet branchSetID=ID
uncertaintyType=TYPE>
<logicTreeBranch>
<uncertaintyModel>VALUE</uncertaintyModel>
<uncertaintyWeight>WEIGHT</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
As it appears from this example, the structure of a logic tree is a set of nested elements.
A schematic representation of the elemental components of a logic tree structure is provided
in Figure 3.2. A branching level identifies the position where branching occurs, while a
branch set identifies a collection of individual branches whose weights sum to 1.
Figure 3.2 – Generic Logic Tree structure as described in terms of branching levels, branch sets, and
individual branches.
<logicTree logicTreeID="ID">
...
</logicTree>
There are no restrictions on the number of tree levels that can be defined.
A logicTreeBranchingLevel is defined as a sequence of logicTreeBranchSet elements:
<logicTree logicTreeID="ID">
...
<logicTreeBranchingLevel branchingLevelID="ID_#">
<logicTreeBranchSet branchSetID="ID_1"
uncertaintyType="UNCERTAINTY_TYPE">
...
</logicTreeBranchSet>
<logicTreeBranchSet branchSetID="ID_2"
uncertaintyType="UNCERTAINTY_TYPE">
...
</logicTreeBranchSet>
...
<logicTreeBranchSet branchSetID="ID_N"
uncertaintyType="UNCERTAINTY_TYPE">
...
</logicTreeBranchSet>
</logicTreeBranchingLevel>
...
</logicTree>
There are no restrictions on the number of branch sets that can be defined inside a branching
level.
A branchSet is defined as a sequence of logicTreeBranch elements, each specified by
an uncertaintyModel element (a string identifying an uncertainty model; the content
of the string varies with the uncertaintyType attribute value of the branchSet element)
and the uncertaintyWeight element (specifying the probability/weight associated to the
uncertaintyModel):
<uncertaintyModel>GMPE_NAME</uncertaintyModel>                (uncertaintyType="gmpeModel")
<uncertaintyModel>SOURCE_MODEL_FILE_PATH</uncertaintyModel>   (uncertaintyType="sourceModel")
<uncertaintyModel>MAX_MAGNITUDE_INCREMENT</uncertaintyModel>  (uncertaintyType="maxMagGRRelative")
<uncertaintyModel>A_VALUE B_VALUE</uncertaintyModel>          (uncertaintyType="abGRAbsolute")
<uncertaintyModel>MAX_MAGNITUDE</uncertaintyModel>            (uncertaintyType="maxMagGRAbsolute")
<uncertaintyModel>DIP_INCREMENT</uncertaintyModel>            (uncertaintyType="simpleFaultDipRelative")
<uncertaintyModel>DIP</uncertaintyModel>                      (uncertaintyType="simpleFaultDipAbsolute")
• applyToBranches: specifies the branches (in the previous branching level) to which
the branch set is linked. The default is the keyword ALL, which means that a branch set
is by default linked to all branches in the previous branching level. By specifying one or
more branches to which the branch set links, non-symmetric logic trees can be defined.
• applyToSources: specifies the sources in the source model to which the uncertainty
applies. Sources are specified in terms of their IDs, e.g.:
applyToSources="1 2"
• applyToSourceType: specifies the source type to which the uncertainty applies. Only
one source typology can be defined (area, point, simpleFault, complexFault),
e.g.:
applyToSourceType="area"
The Seismic Source System contains the model (or the models) describing position, geometry
and activity of seismic sources of engineering importance for a set of sites as well as the
possible epistemic uncertainties to be incorporated into the calculation of seismic hazard.
The optional branching levels will contain rules that modify parameters of the sources in the
initial seismic source model.
For example, if the epistemic uncertainties to be considered are source geometry and max-
imum magnitude, the modeller can create a logic tree structure with three initial seismic
source models (each one exploring a different definition of the geometry of sources) and
one branching level accounting for the epistemic uncertainty on the maximum magnitude.
Below we provide an example of such a logic tree structure. Note that the uncertainty on the
maximum magnitude is specified in terms of relative increments with respect to the initial
maximum magnitude defined for each source in the initial seismic source models.
<logicTreeBranchingLevel branchingLevelID="bl1">
    <logicTreeBranchSet uncertaintyType="sourceModel"
                        branchSetID="bs1">
        <logicTreeBranch branchID="b1">
            <uncertaintyModel>seismic_source_model_A.xml
            </uncertaintyModel>
            <uncertaintyWeight>0.2</uncertaintyWeight>
        </logicTreeBranch>
        <logicTreeBranch branchID="b2">
            <uncertaintyModel>seismic_source_model_B.xml
            </uncertaintyModel>
            <uncertaintyWeight>0.3</uncertaintyWeight>
        </logicTreeBranch>
        <logicTreeBranch branchID="b3">
            <uncertaintyModel>seismic_source_model_C.xml
            </uncertaintyModel>
            <uncertaintyWeight>0.5</uncertaintyWeight>
        </logicTreeBranch>
    </logicTreeBranchSet>
</logicTreeBranchingLevel>

<logicTreeBranchingLevel branchingLevelID="bl2">
    <logicTreeBranchSet branchSetID="bs21"
                        uncertaintyType="maxMagGRRelative">
        <logicTreeBranch branchID="b211">
            <uncertaintyModel>+0.0</uncertaintyModel>
            <uncertaintyWeight>0.6</uncertaintyWeight>
        </logicTreeBranch>
        <logicTreeBranch branchID="b212">
            <uncertaintyModel>+0.5</uncertaintyModel>
            <uncertaintyWeight>0.4</uncertaintyWeight>
        </logicTreeBranch>
    </logicTreeBranchSet>
</logicTreeBranchingLevel>

</logicTree>
</nrml>
A source model could be split by tectonic region using the following syntax in the source
model logic tree:
<?xml version="1.0" encoding="UTF-8"?>
<nrml xmlns:gml="http://www.opengis.net/gml"
xmlns="http://openquake.org/xmlns/nrml/0.5">
<logicTree logicTreeID="lt1">
<logicTreeBranchingLevel branchingLevelID="bl1">
<logicTreeBranchSet uncertaintyType="sourceModel"
branchSetID="bs1">
<logicTreeBranch branchID="b1">
<uncertaintyModel>
active_shallow_sources.xml
stable_shallow_sources.xml
</uncertaintyModel>
<uncertaintyWeight>1.0</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
</logicTree>
</nrml>
Calculation type and model info
[general]
description = A demo OpenQuake-engine .ini file for classical PSHA
calculation_mode = classical
random_seed = 1024

[geometry]
region = 10.0 43.0, 12.0 43.0, 12.0 46.0, 10.0 46.0
region_grid_spacing = 10.0
The first option, shown above, defines a polygonal region (a list of longitude-latitude vertices [degrees]) within which hazard is computed on a grid of sites whose spacing [km] is controlled by region_grid_spacing. The second option allows the definition of a number of sites where the hazard will be
computed. Each site is specified in terms of a longitude, latitude tuple. Optionally, if the
user wants to consider the elevation of the sites, a value of depth [km] can also be specified,
where positive values indicate below sea level, and negative values indicate above sea level
(i.e. the topographic surface). If no value of depth is given for a site, it is assumed to be zero.
An example is provided below:
[geometry]
sites = 10.0 43.0, 12.0 43.0, 12.0 46.0, 10.0 46.0
If the list of sites is too long the user can specify the name of a csv file as shown below:
[geometry]
sites_csv = <name_of_the_csv_file>
The format of the csv file containing the list of sites is a sequence of points (one per row)
specified in terms of the longitude, latitude tuple. Depth values are again optional. An
example is provided below:
179.0,90.0
178.0,89.0
177.0,88.0
[logic_tree]
number_of_logic_tree_samples = 0
If the seismic source logic tree and the ground motion logic tree do not contain epistemic
uncertainties the engine will create a single PSHA input.
Generation of the earthquake rupture forecast
[erf]
rupture_mesh_spacing = 5
width_of_mfd_bin = 0.1
area_source_discretization = 10
This section of the configuration file is used to specify the level of discretization of the mesh
representing faults, of the grid used to delineate the area sources, and of the magnitude-frequency
distribution. Note that the smaller the mesh spacing (or bin width), the greater both the
precision of the calculation and the computational demand.
In cases where the source model may contain a mixture of simple and complex ruptures it
is possible to define a different rupture mesh spacing for complex faults only. This may be
helpful in models that permit floating ruptures over large subduction sources, in which the
nearest source to site distances may be larger than 20 - 30 km, and for which a small mesh
spacing would produce a very large number of ruptures. The spacing for complex faults only
can be configured by the line:
complex_fault_mesh_spacing = 10
[site_params]
reference_vs30_type = measured
reference_vs30_value = 760.0
reference_depth_to_2pt5km_per_sec = 5.0
reference_depth_to_1pt0km_per_sec = 100.0
In this section the user specifies local soil conditions. The simplest solution is to define
uniform site conditions (i.e. all the sites have the same characteristics).
Alternatively it is possible to define spatially variable soil properties in a separate file; the
engine will then assign to each investigation location the values of the closest point used to
specify site conditions.
[site_params]
site_model_file = site_model.csv
The file containing the site model has the following structure:
lon,lat,vs30,z1pt0,z2pt5,vs30measured,backarc
10.0,40.0,800.0,19.367196734,0.588625072259,0,0
10.1,40.0,800.0,19.367196734,0.588625072259,0,0
10.2,40.0,800.0,19.367196734,0.588625072259,0,0
10.3,40.0,800.0,19.367196734,0.588625072259,0,0
10.4,40.0,800.0,19.367196734,0.588625072259,0,0
Notice that a 0 in the vs30measured field means that the vs30 value is inferred, not
measured. Most GMPEs are not sensitive to this field, so it can usually be omitted. For the
backarc parameter, 0 means false, which is the default, so that column can also be omitted.
In general, any column that has a default or is not needed by the GMPEs in use can be
omitted, while an error is raised if a relevant column is missing.
If the closest available site with soil conditions is at a distance greater than 5 km from the
investigation location, a warning is generated.
Note: For backward-compatibility reasons, the site model file can also be given in XML
format. That old format is deprecated, but there are no plans to remove it anytime soon.
Calculation configuration
[calculation]
source_model_logic_tree_file = source_model_logic_tree.xml
gsim_logic_tree_file = gmpe_logic_tree.xml
investigation_time = 50.0
intensity_measure_types_and_levels = {"PGA": [0.005, ..., 2.13]}
truncation_level = 3
maximum_distance = 200.0
31 maximum_distance = 200.0
This section of the oq-engine configuration file specifies the parameters that are relevant for
the calculation of hazard: the names of the two files containing the Seismic Source System
and the Ground Motion System, the duration of the time window used to compute the
hazard, the ground motion intensity measure types and levels for which the probability of
exceedance will be computed, the level of truncation of the Gaussian distribution of the
logarithm of ground motion used in the calculation of hazard, and the maximum integration
distance (i.e. the distance within which sources will contribute to the computation of the
hazard).
The maximum distance refers to the largest distance between a rupture and the target
calculation sites in order for the rupture to be considered in the PSHA calculation. This can
be input directly in terms of kilometres (as above). There may be cases, however, in which
it may be appropriate to have a different maximum source to site distance depending on
the tectonic region type. This may be used, for example, to eliminate the impact of small,
very far-field sources in regions of high attenuation (in which case maximum distance is
reduced), or conversely it may be raised to allow certain source types to contribute to the
hazard at greater distances (such as in the case when the region has lower attenuation). An
example configuration for a maximum distance in Active Shallow Crust of 150 km, and in
Stable Continental Crust of 200 km, is shown below:
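maximum_distance = {'Active Shallow Crust': 150.0, 'Stable Continental Crust': 200.0}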
An even more advanced approach is to use a maximum distance depending on the magnitude
(magnitude-distance filter): in that case the user must specify the maximum distance for a
discrete set of magnitudes. Here is an example:

maximum_distance = {'Active Shallow Crust': [(8, 250), (7, 150), (5, 50)],
                    'Stable Continental Crust': [(8, 300), (7, 200), (5, 100)]}
You should read this example as follows: for Active Shallow Crust
• keep sites within 250 km from the rupture if the magnitude is <= 8
• keep sites within 150 km from the rupture if the magnitude is <= 7
• keep sites within 50 km from the rupture if the magnitude is <= 5
and for Stable Continental Crust
• keep sites within 300 km from the rupture if the magnitude is <= 8
• keep sites within 200 km from the rupture if the magnitude is <= 7
• keep sites within 100 km from the rupture if the magnitude is <= 5
A magnitude-distance filter can also be given without reference to tectonic region types, as sketched below; in this case the same magnitude-distance filter is used for all tectonic region types.
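maximum_distance = [(8, 250), (7, 150), (5, 50)]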
If the magnitude is above the maximum magnitude of the filter (in this example 8) we keep
the sites within 2000 km of the ruptures, i.e. effectively we do not filter.
Notice that the filtering has a big impact on performance: by reducing the maximum
distance for small magnitudes, one can easily speed up the calculations by a factor of 2-3 or
more, without losing much precision.
Output
[output]
export_dir = outputs/
# given the specified `intensity_measure_types_and_levels`
mean_hazard_curves = true
quantile_hazard_curves = 0.1 0.5 0.9
uniform_hazard_spectra = false
poes = 0.1
The final section of the configuration file contains the parameters controlling the types of
output to be produced. Providing an export directory tells OpenQuake where to place the
output files when the --exports flag is used when running the program. Setting
mean_hazard_curves to true will result in a specific output containing the mean curves of
the logic tree; likewise, quantile_hazard_curves will produce separate files containing
the quantile hazard curves at the quantiles listed (0.1, 0.5 and 0.9 in the example above;
leave blank or omit if no quantiles are required). Setting uniform_hazard_spectra to
true will output the uniform hazard spectra at the same probabilities of exceedance (poes) as
those specified by the later option poes. The probabilities specified here correspond to the
set investigation time. Specifying poes will output hazard maps. For more information about
the outputs of the calculation, see the section: “Description of hazard output” (page 66).
In this section we describe the structure of the configuration file to be used to complete a
seismic hazard disaggregation. Since only a few parts of the standard configuration file need
to be changed, we use the description given in Section 3.4.1 at page 55 as a reference
and emphasize the major differences here.
[general]
description = A demo .ini file for PSHA disaggregation
calculation_mode = disaggregation
random_seed = 1024

[geometry]
sites = 11.0 44.5
In this section it is necessary to specify the geographic coordinates of the site(s) where the
disaggregation will be performed. The coordinates of multiple sites should be separated with
a comma.
Disaggregation parameters
The disaggregation parameters need to be added to the standard configuration file. They
are shown in the following example and a description of each parameter is provided below.
[disaggregation]
poes_disagg = 0.02, 0.1
mag_bin_width = 1.0
distance_bin_width = 25.0
coordinate_bin_width = 1.5
num_epsilon_bins = 3
disagg_outputs = Mag_Dist_Eps Mag_Lon_Lat
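In brief: poes_disagg lists the probabilities of exceedance (within the investigation time) for which disaggregation is performed; mag_bin_width, distance_bin_width and coordinate_bin_width define the width of the magnitude, distance [km] and longitude-latitude [decimal degrees] bins of the disaggregation matrix; num_epsilon_bins is the number of bins used for the epsilon term of the GMPE; and disagg_outputs lists the disaggregation matrices to be computed.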
As mentioned above, the user also has the option to perform disaggregation by directly spec-
ifying the intensity measure level to be disaggregated, rather than specifying the probability
of exceedance. An example is shown below:
[disaggregation]
iml_disagg = {'PGA': 0.1}
In the following we describe the sections of the configuration file that are required to complete
event based PSHA calculations.
Calculation type and model info
This part is almost identical to the corresponding one described in Section 3.4.1. Note the
setting of the calculation_mode parameter which now corresponds to event_based.
[general]
description = A demo OpenQuake-engine .ini file for event based PSHA
calculation_mode = event_based
random_seed = 1024
ses_per_logic_tree_path = 5
ground_motion_correlation_model = JB2009
ground_motion_correlation_params = {"vs30_clustering": True}
The acceptable flags for the parameter vs30_clustering are False and True, with a
capital F and T respectively; 0 and 1 are also acceptable.
Output
This part replaces the Output section of the configuration file example described in
Section 3.4.1 at page 55.
[output]
export_dir = /tmp/xxx
save_ruptures = true
ground_motion_fields = true
# post-process ground motion fields into hazard curves,
# given the specified `intensity_measure_types_and_levels`
hazard_curves_from_gmfs = true
mean_hazard_curves = true
quantile_hazard_curves = 0.15 0.5 0.85
poes = 0.1 0.2
Since OpenQuake-engine v2.2, it is possible to export information about the ruptures
directly in CSV format.
The option hazard_curves_from_gmfs instructs the engine to use the event-based ground
motion values to compute hazard curves indicating the probabilities of exceeding the intensity
measure levels set previously in the intensity_measure_types_and_levels option.
[sites]
sites_csv = sites.csv

[rupture]
rupture_model_file = rupture_model.xml
rupture_mesh_spacing = 2.0

[site_params]
site_model_file = site_model.xml

[correlation]
ground_motion_correlation_model = JB2009
ground_motion_correlation_params = {"vs30_clustering": True}

[hazard_calculation]
intensity_measure_types = PGA, SA(0.3), SA(1.0)
random_seed = 42
truncation_level = 3.0
maximum_distance = 200.0
gsim = BooreAtkinson2008
number_of_ground_motion_fields = 1000
Listing 20 – Example configuration file for a scenario hazard calculation
Most of the job configuration parameters required for running a scenario hazard calculation,
seen in the example in Listing 20, are the same as those described in the previous
sections for the classical PSHA calculator (Section 3.4.1) and the event-based PSHA calcu-
lator (Section 3.4.3). The set of sites at which the ground motion fields will be produced
can be specified by using either the sites or sites_csv parameters, or the region and
region_grid_spacing parameters, similar to the classical PSHA and event-based PSHA
calculators. The parameters specific to the scenario calculator are described below:
• number_of_ground_motion_fields: this parameter is used to specify the number
of Monte Carlo simulations of the ground motion values at the specified sites.
• gsim: this parameter indicates the name of a ground motion prediction equation
(a list of available GMPEs can be obtained using oq info -g or oq info --gsims;
these are also documented at:
http://docs.openquake.org/oq-engine/2.8/openquake.hazardlib.gsim.html).
Multiple ground motion prediction equations can be used for a scenario hazard calculation by
providing a GMPE logic tree file (described previously in Section 3.3.1) using the parameter
gsim_logic_tree_file. In this case, the OpenQuake-engine generates ground motion
fields for all GMPEs specified in the logic tree file. The branch weights in the logic tree file
are ignored in a scenario analysis and only the individual branch results are computed. Mean
or quantile ground motion fields will not be generated.
The ground motion fields will be computed at each of the sites and for each of the intensity
measure types specified in the job configuration file. The above calculation can be run using
the command line:
user@ubuntu:~$ oq engine --run job.ini
After the calculation is completed, a message similar to the following will be displayed:
In this Chapter we provide a description of the main commands available for running hazard
calculations with the oq-engine, and of the file formats used to represent the results of the analyses.
A general introduction on the use of the OpenQuake-engine is provided in Chapter 1 at
page 11. The reader is invited to consult this part before diving into the following sections.
The amount of information prompted during the execution of the analysis can be controlled
through the --log-level flag as shown in the example below:
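user@ubuntu:~$ oq engine --run job.ini --log-level debug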
In this example we ask the engine to provide an extensive amount of information (usually
not justified for a standard analysis). Alternative options are: debug, info, warn, error,
critical.
The first option is to export the results automatically at the end of the calculation, by passing
the --exports flag when launching the job; this will export the results to the export directory
specified in the job.ini file. The second option allows the user to export the computed results,
or just a subset of them, at any time. In order to obtain the list of results of the hazard
calculations stored in the oq-engine database, the user can utilize the --lhc command
(‘list hazard calculations’):
user@ubuntu:~$ oq engine --lhc
The execution of this command will produce a list similar to the one provided below, where
each row starts with the calculation ID:
Subsequently, the user can get the list of results stored for a specific hazard analysis by using
the --list-outputs, or --lo, command, as in the example below (the number listed with
each output is the result ID):
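user@ubuntu:~$ oq engine --lo CALCULATION_ID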
A hazard calculation produces a set of results (i.e. hazard curves, ground motion fields,
disaggregation matrices, UHS, for each logic-tree realisation) which reflects the epistemic
uncertainties introduced in the PSHA input model. For each logic tree sample, results are
computed and stored. Calculation of result statistics (mean, standard deviation, quantiles)
is supported by all the calculators.
By default, OpenQuake will export only the statistical results, i.e. mean curves and quantiles.
If the user requires the complete results for all realizations, there is a specific command for
that, oq extract hazard/all. Beware that if the logic tree contains a large number of end
branches the process of exporting the results from each end branch can add a significant
amount of time - possibly longer than the computation time - and result in a large volume
of disk space being used. In this case it is best to postprocess the data programmatically.
Please contact us and we will be happy to give directions on how to do that in Python.
To export from the database the outputs (in this case hazard curves) associated with one of the
output IDs, one can do so with the following command:
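user@ubuntu:~$ oq engine --export-output OUTPUT_ID TARGET_DIR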
Alternatively, if the user wishes to export all of the outputs associated with a particular
calculation, they can use the --export-outputs option with the corresponding calculation
ID:
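user@ubuntu:~$ oq engine --export-outputs CALCULATION_ID TARGET_DIR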
The exports will produce one or more nrml files containing the seismic hazard curves, as
represented below in Listing 21.
Notwithstanding the intuitiveness of this file, let’s have a brief overview of the information
included. The overall content of this file is a list of hazard curves, one for each investigated
site, computed using a PSHA input model representing one possible realisation obtained
using the complete logic tree structure.
The attributes of the hazardCurves element specify the path of the logic
tree used to create the seismic source model (sourceModelTreePath) and the ground
motion model (gsimTreePath), plus the intensity measure type and the investigation time
used to compute the probability of exceedance.
The IMLs element contains the values of shaking used by the
engine to compute the probability of exceedance in the investigation time. For each site, this
file contains a hazardCurve element which has the coordinates (longitude and latitude
in decimal degrees) of the site and the values of the probability of exceedance for all the
intensity measure levels specified in the IMLs element.
If the hazard calculation is configured to produce results including seismic hazard maps and
uniform hazard spectra, then the list of outputs would display the following:
Listing 22 shows a sample of the nrml file used to describe a hazard map, and Listing 23
shows a sample of the nrml used to describe a uniform hazard spectrum.
The list of outputs for an event-based calculation contains a set of ruptures and their
corresponding sets of ground motion fields. Exporting the outputs from the ruptures will
produce, for each realisation, an NRML file containing a collection of ruptures. An example
is shown below in Listing 24.
The first part of the file describes the generated stochastic event sets and the
investigation time covered. Inside the <SES> tag there is a list of integers (a single integer
in this example) which are unique IDs for the seismic events associated to the rupture.
In general a rupture can occur more than once, and the number of events is given by the
multiplicity attribute (in this case 1).
The second part describes a rupture. The information provided describes entirely the
geometry of the rupture as well as its rupturing properties (e.g. rake, magnitude). The
rupture ID is an integer that represents each rupture uniquely: it should not be confused
with the event ID.
The outputs from the GMFs can be exported either in the xml or csv formats. If the GMFs
are exported in the xml format, the oq-engine will produce an xml file for each realisation
with the corresponding ground motion fields. Listing 25 is an example of a gmf collection
NRML file containing one ground motion field:
Listing 25 – Example ground motion field collection output file comprising a single GMF
Exporting the outputs from the GMFs in the csv format results in two csv files, illustrated
in the example files in Table 4.1 and Table 4.5. The sites csv file provides the association
between the site ids in the GMFs csv file and their latitude and longitude coordinates.
Table 4.1 – Example of a ground motion fields csv output file for an event based hazard calculation
The 'Seismic Source Groups' output produces a csv file listing the tectonic region types
involved in the calculation and the effective number of ruptures generated by each of them.
An example of such a file is shown below in Table 4.2.
The 'Realizations' output produces a csv file listing the source model and the combination of
ground shaking intensity models for each path sampled from the logic tree. An example of
such a file is shown below in Table 4.3.
5 <gmfCollection gsimTreePath="BooreAtkinson2008">
6 <gmfSet stochasticEventSetId="0">
7 <gmf IMT="PGA" ruptureId="0">
8 <node gmv="6.2110670E-02" lat="3.8113000E+01" lon="-1.2257000E+02"/>
9 <node gmv="8.5740492E-02" lat="3.8113000E+01" lon="-1.2211400E+02"/>
10 <node gmv="2.2349268E-01" lat="3.7910000E+01" lon="-1.2200000E+02"/>
11 <node gmv="2.1841662E-01" lat="3.8000000E+01" lon="-1.2200000E+02"/>
12 <node gmv="1.6976549E-01" lat="3.8113000E+01" lon="-1.2200000E+02"/>
13 <node gmv="3.7024489E-01" lat="3.8225000E+01" lon="-1.2200000E+02"/>
14 <node gmv="8.2447365E-02" lat="3.8113000E+01" lon="-1.2188600E+02"/>
15 </gmf>
16
17 ...
18
31 </nrml>
Listing 26 – Example ground motion field collection output file for a scenario
As with the event based calculator, exporting the GMFs from a scenario calculation in the csv
format results in two csv files, illustrated in Table 4.4 and Table 4.5; the sites csv file provides
the association between the site ids in the GMFs csv file and their latitude and longitude
coordinates.
Table 4.4 – Example of a ground motion fields csv output file for a scenario (Download example)
In this example, the gmfs have been computed using two different GMPEs, so the realization
indices ('rlzi') in the first column of the example gmfs file are either 0 or 1. The gmfs file lists
the ground motion values for 100 simulations of the scenario, so the event indices ('eid')
in the third column go from 0–99. There are seven sites with indices 0–6 ('sid') which are
repeated in the second column for each of the 100 simulations of the event and for each of
the two GMPEs. Finally, the subsequent columns list the ground motion values for each of
the intensity measure types specified in the job configuration file.
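As an illustrative sketch of this layout (the header spellings and ground motion values below
are placeholders, assuming PGA and SA(0.3) were the configured intensity measure types):

rlzi,sid,eid,gmv_PGA,gmv_SA(0.3)
0,0,0,0.062,0.133
0,1,0,0.086,0.189
...
1,6,99,0.052,0.104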
Table 4.5 – Example of a sites csv output file for a scenario (Download example)
5. Demonstrative Examples
A number of hazard calculation demos are provided with the oq-engine installation, showing
different examples of input and configuration files for different use cases.
This is the list of demos which illustrate how to use the oq-engine for various types of seismic
hazard analysis:
• AreaSourceClassicalPSHA
• CharacteristicFaultSourceCase1ClassicalPSHA
• CharacteristicFaultSourceCase2ClassicalPSHA
• CharacteristicFaultSourceCase3ClassicalPSHA
• ComplexFaultSourceClassicalPSHA
• Disaggregation
• EventBasedPSHA
• LogicTreeCase1ClassicalPSHA
• LogicTreeCase2ClassicalPSHA
• LogicTreeCase3ClassicalPSHA
• PointSourceClassicalPSHA
• SimpleFaultSourceClassicalPSHA
Since this logic tree considers only one tectonic region (i.e. Active Shallow Crust), all
the seismic sources will be considered active shallow crust sources.
10 <logicTreeBranch branchID="b1">
11 <uncertaintyModel>
12 ChiouYoungs2008
13 </uncertaintyModel>
14 <uncertaintyWeight>1.0</uncertaintyWeight>
15 </logicTreeBranch>
16
17 </logicTreeBranchSet>
18 </logicTreeBranchingLevel>
19 </logicTree>
20 </nrml>
6 [geometry]
7 region = ...
8 region_grid_spacing = 5.0
9
10 [logic_tree]
11 number_of_logic_tree_samples = 0
12
13 [erf]
14 rupture_mesh_spacing = 2
15 width_of_mfd_bin = 0.1
16 area_source_discretization = 5.0
17
18 [site_params]
19 reference_vs30_type = measured
20 reference_vs30_value = 600.0
21 reference_depth_to_2pt5km_per_sec = 5.0
22 reference_depth_to_1pt0km_per_sec = 100.0
23
24 [calculation]
25 source_model_logic_tree_file = source_model_logic_tree.xml
26 gsim_logic_tree_file = gmpe_logic_tree.xml
27 investigation_time = 50.0
28 intensity_measure_types_and_levels = {"PGV": [2, 4, 6 ,8, 10, ...],
29 "PGA": [0.005, 0.007, ...],
30 "SA(0.025)": [...],
31 "SA(0.05)": [...],
32 "SA(0.1)": [...],
33 "SA(0.2)": [...],
34 "SA(0.5)": [...],
35 "SA(1.0)": [...],
36 "SA(2.0)": [...]}
37 truncation_level = 3
38 maximum_distance = 200.0
39
40 [output]
41 mean_hazard_curves = false
42 quantile_hazard_curves = 0.15, 0.50, 0.85
43 hazard_maps = true
44 uniform_hazard_spectra = true
45 poes = 0.10, 0.02
Listing 28 – Example configuration file for a classical probabilistic hazard calculation (Download
example)
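As a reminder of how the poes parameter relates to return periods: under the Poisson
assumption, a probability of exceedance p in an investigation time T corresponds to a return
period

$$T_R = -\frac{T}{\ln(1 - p)},$$

so, with investigation_time = 50.0, poes = 0.10 corresponds to $T_R \approx 475$ years and
poes = 0.02 to $T_R \approx 2475$ years; the hazard maps below are labelled accordingly as
"10% in 50 years".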
Figure 5.1 – Hazard maps (for PGA, 10% in 50 years) as obtained from the different oq-engine
source typologies. (a) Point Source. (b) Area source. The solid black line represents the area
boundary. (c) Simple Fault Source. The dashed line represents the fault trace, while the solid line
the fault surface projection. (d) Complex Fault Source. The solid line represents the fault surface
projection.
Figure 5.2 – Hazard maps (for PGA, 10% in 50 years) as obtained from characteristic fault sources
with simple fault geometry (a), complex fault geometry (b), and collection of planar surfaces (c)
6 <logicTreeBranchingLevel branchingLevelID="bl1">
7
8 <logicTreeBranchSet uncertaintyType="sourceModel"
9 branchSetID="bs1">
10 <logicTreeBranch branchID="b1">
11 <uncertaintyModel>
12 source_model_1.xml
13 </uncertaintyModel>
14 <uncertaintyWeight>0.5</uncertaintyWeight>
15 </logicTreeBranch>
16 <logicTreeBranch branchID="b2">
17 <uncertaintyModel>
18 source_model_2.xml
19 </uncertaintyModel>
20 <uncertaintyWeight>0.5</uncertaintyWeight>
21 </logicTreeBranch>
22 </logicTreeBranchSet>
23
24 </logicTreeBranchingLevel>
25
26 </logicTree>
27 </nrml>
Listing 29 – Source model logic tree input file used in the LogicTreeCase1ClassicalPSHA demo
The two source models are defined in two separate files, source_model_1.xml and
source_model_2.xml, each associated with a corresponding weight (0.5 for both).
The GSIM logic tree file contains the structure as shown in Listing 30.
The source model contains sources belonging to Active Shallow Crust and Stable Continental
Crust, therefore the GSIM logic tree defines two branching levels, one for each considered
tectonic region type. Moreover for each tectonic region a branch set with two GMPEs is
defined: Boore and Atkinson 2008 and Chiou and Youngs 2008 for Active Shallow Crust
3 <nrml xmlns:gml="http://www.opengis.net/gml"
4 xmlns="http://openquake.org/xmlns/nrml/0.5">
5 <logicTree logicTreeID="lt1">
6
7 <logicTreeBranchingLevel branchingLevelID="bl1">
8 <logicTreeBranchSet uncertaintyType="gmpeModel"
9 applyToTectonicRegionType="Active Shallow Crust"
10 branchSetID="bs1">
11 <logicTreeBranch branchID="b11">
12 <uncertaintyModel>
13 BooreAtkinson2008
14 </uncertaintyModel>
15 <uncertaintyWeight>0.5</uncertaintyWeight>
16 </logicTreeBranch>
17 <logicTreeBranch branchID="b12">
18 <uncertaintyModel>
19 ChiouYoungs2008
20 </uncertaintyModel>
21 <uncertaintyWeight>0.5</uncertaintyWeight>
22 </logicTreeBranch>
23 </logicTreeBranchSet>
24 </logicTreeBranchingLevel>
25
26 <logicTreeBranchingLevel branchingLevelID="bl2">
27 <logicTreeBranchSet uncertaintyType="gmpeModel"
28 applyToTectonicRegionType="Stable Continental Crust"
29 branchSetID="bs2">
30 <logicTreeBranch branchID="b21">
31 <uncertaintyModel>
32 ToroEtAl2002</uncertaintyModel>
33 <uncertaintyWeight>0.5</uncertaintyWeight>
34 </logicTreeBranch>
35 <logicTreeBranch branchID="b22">
36 <uncertaintyModel>
37 Campbell2003</uncertaintyModel>
38 <uncertaintyWeight>0.5</uncertaintyWeight>
39 </logicTreeBranch>
40 </logicTreeBranchSet>
41 </logicTreeBranchingLevel>
42
43 </logicTree>
44 </nrml>
Listing 30 – GSIM logic tree input file used in the LogicTreeCase1ClassicalPSHA demo
and Toro et al. 2002 and Campbell 2003 for Stable Continental Crust. By processing the
above logic tree files using the logic tree path enumeration mode (enabled by setting
number_of_logic_tree_samples = 0 in the configuration file), hazard results are computed
for 8 logic tree paths (2 source models x 2 GMPEs for Active Shallow Crust x 2 GMPEs for
Stable Continental Crust).
LogicTreeCase2ClassicalPSHA defines a single source model consisting of only two sources
(area and simple fault) belonging to different tectonic region types (Active Shallow Crust
and Stable Continental Region) and both characterized by a truncated Gutenberg-Richter
distribution. The logic tree defines uncertainties for G-R a and b values (three possible pairs
for each source), maximum magnitude (three values for each source) and uncertainties on
the GMPEs for each tectonic region type (two GMPEs per region type).
To accommodate such a structure, the source model logic tree is defined as shown in Listing 31.
The first branching level defines the source model. For each source, two branch-
ing levels are created, one defining uncertainties on G-R a and b values (de-
fined by setting uncertaintyType="abGRAbsolute") and G-R maximum magnitude
(uncertaintyType="maxMagGRAbsolute").
It is important to notice that each branch set is applied to a specific source by defining the
attribute applyToSources, followed by the source ID. The GSIM logic tree file is the same
as used for LogicTreeCase1ClassicalPSHA. By setting number_of_logic_tree_samples = 0
in the configuration file, hazard results are obtained for 324 paths (1 source model x
3 (a, b) pairs for source 1 x 3 (a, b) pairs for source 2 x 3 max magnitude values for source 1
x 3 max magnitude values for source 2 x 2 GMPEs for Active Shallow Crust x 2 GMPEs for
Stable Continental Crust), see Figure 5.3.
LogicTreeCase3ClassicalPSHA illustrates an example of a logic tree defining relative uncertain-
ties on G-R maximum magnitude and b value. A single source model is considered, containing
two sources belonging to different tectonic region types and both characterized by a G-R
magnitude frequency distribution. The source model logic tree for this demo is shown in
Listing 32.
After the first branching level defining the source model, two additional branching levels are
defined, one defining relative uncertainties on b value (bGRRelative, applied consistently
to all sources in the source model) and the second on maximum magnitude
(maxMagGRRelative). Similar to the other cases, two GMPEs are considered for each
tectonic region type and therefore the total number of logic tree paths is 36 (1 source model
x 3 b value increments x 3 maximum magnitude increments x 2 GMPEs for Active Shallow
Crust x 2 GMPEs for Stable Continental Crust).
6 <logicTreeBranchingLevel branchingLevelID="bl1">
7 <logicTreeBranchSet uncertaintyType="sourceModel"
8 branchSetID="bs1">
9 <logicTreeBranch branchID="b11">
10 <uncertaintyModel>
11 source_model.xml
12 </uncertaintyModel>
13 <uncertaintyWeight>1.0</uncertaintyWeight>
14 </logicTreeBranch>
15 </logicTreeBranchSet>
16 </logicTreeBranchingLevel>
17
18 <logicTreeBranchingLevel branchingLevelID="bl2">
19 <logicTreeBranchSet uncertaintyType="abGRAbsolute"
20 applyToSources="1"
21 branchSetID="bs21">
22 <logicTreeBranch branchID="b21">
23 <uncertaintyModel>4.6 1.1</uncertaintyModel>
24 <uncertaintyWeight>0.333</uncertaintyWeight>
25 </logicTreeBranch>
26 <logicTreeBranch branchID="b22">
27 <uncertaintyModel>4.5 1.0</uncertaintyModel>
28 <uncertaintyWeight>0.333</uncertaintyWeight>
29 </logicTreeBranch>
30 <logicTreeBranch branchID="b23">
31 <uncertaintyModel>4.4 0.9</uncertaintyModel>
32 <uncertaintyWeight>0.334</uncertaintyWeight>
33 </logicTreeBranch>
34 </logicTreeBranchSet>
35 </logicTreeBranchingLevel>
36
37 <logicTreeBranchingLevel branchingLevelID="bl3">
38 <logicTreeBranchSet uncertaintyType="abGRAbsolute"
39 applyToSources="2"
40 branchSetID="bs31">
41 <logicTreeBranch branchID="b31">
42 <uncertaintyModel>3.3 1.0</uncertaintyModel>
43 <uncertaintyWeight>0.333</uncertaintyWeight>
44 </logicTreeBranch>
45 <logicTreeBranch branchID="b32">
46 <uncertaintyModel>3.2 0.9</uncertaintyModel>
47 <uncertaintyWeight>0.333</uncertaintyWeight>
48 </logicTreeBranch>
49 <logicTreeBranch branchID="b33">
50 <uncertaintyModel>3.1 0.8</uncertaintyModel>
51 <uncertaintyWeight>0.334</uncertaintyWeight>
52 </logicTreeBranch>
53 </logicTreeBranchSet>
54 </logicTreeBranchingLevel>
6 <logicTreeBranchingLevel branchingLevelID="bl1">
7 <logicTreeBranchSet uncertaintyType="sourceModel"
8 branchSetID="bs1">
9 <logicTreeBranch branchID="b11">
10 <uncertaintyModel>
11 source_model.xml
12 </uncertaintyModel>
13 <uncertaintyWeight>1.0</uncertaintyWeight>
14 </logicTreeBranch>
15 </logicTreeBranchSet>
16 </logicTreeBranchingLevel>
17
18 <logicTreeBranchingLevel branchingLevelID="bl2">
19 <logicTreeBranchSet uncertaintyType="bGRRelative"
20 branchSetID="bs21">
21 <logicTreeBranch branchID="b21">
22 <uncertaintyModel>+0.1</uncertaintyModel>
23 <uncertaintyWeight>0.333</uncertaintyWeight>
24 </logicTreeBranch>
25 <logicTreeBranch branchID="b22">
26 <uncertaintyModel>0.0</uncertaintyModel>
27 <uncertaintyWeight>0.333</uncertaintyWeight>
28 </logicTreeBranch>
29 <logicTreeBranch branchID="b23">
30 <uncertaintyModel>-0.1</uncertaintyModel>
31 <uncertaintyWeight>0.334</uncertaintyWeight>
32 </logicTreeBranch>
33 </logicTreeBranchSet>
34 </logicTreeBranchingLevel>
35
36 <logicTreeBranchingLevel branchingLevelID="bl3">
37 <logicTreeBranchSet uncertaintyType="maxMagGRRelative"
38 branchSetID="bs31">
39 <logicTreeBranch branchID="b31">
40 <uncertaintyModel>0.0</uncertaintyModel>
41 <uncertaintyWeight>0.333</uncertaintyWeight>
42 </logicTreeBranch>
43 <logicTreeBranch branchID="b32">
44 <uncertaintyModel>+0.5</uncertaintyModel>
45 <uncertaintyWeight>0.333</uncertaintyWeight>
46 </logicTreeBranch>
47 <logicTreeBranch branchID="b33">
48 <uncertaintyModel>+1.0</uncertaintyModel>
49 <uncertaintyWeight>0.334</uncertaintyWeight>
50 </logicTreeBranch>
51 </logicTreeBranchSet>
52 </logicTreeBranchingLevel>
53
54 </logicTree>
Listing 32 – Source model logic tree input file used in the LogicTreeCase3ClassicalPSHA demo
Figure 5.3 – Hazard curves as obtained from the LogicTreeCase2 demo. Solid gray lines represent
individual hazard curves from the different logic tree path (a total of 324 curves). The red dashed
line represents the mean hazard curve, while the red dotted lines depict the quantile levels (0.15,
0.5, 0.95).
An example configuration file for the hazard disaggregation demo is shown below:

[general]
description = ...
calculation_mode = disaggregation
random_seed = 23
[geometry]
sites = 0.5 -0.5
[logic_tree]
number_of_logic_tree_samples = 0
[erf]
rupture_mesh_spacing = 2
width_of_mfd_bin = 0.1
area_source_discretization = 5.0
[site_params]
reference_vs30_type = measured
reference_vs30_value = 600.0
reference_depth_to_2pt5km_per_sec = 5.0
reference_depth_to_1pt0km_per_sec = 100.0
[calculation]
source_model_logic_tree_file = source_model_logic_tree.xml
gsim_logic_tree_file = gmpe_logic_tree.xml
investigation_time = 50.0
intensity_measure_types_and_levels = "PGA": [...]
truncation_level = 3
maximum_distance = 200.0
[disaggregation]
poes_disagg = 0.1
mag_bin_width = 1.0
distance_bin_width = 10.0
coordinate_bin_width = 0.2
num_epsilon_bins = 3
[output]
export_dir = ...
Disaggregation matrices are computed for a single site (located between the two sources)
for a ground motion value corresponding to a probability of exceedance equal to 0.1
(poes_disagg = 0.1). Magnitude values are classified in one magnitude unit bins
(mag_bin_width = 1.0), distances in bins of 10 km (distance_bin_width = 10.0), and
coordinates in bins of 0.2 degrees (coordinate_bin_width = 0.2). 3 epsilon bins are
considered (num_epsilon_bins = 3).
An example configuration file for the event based PSHA demo is shown below:

[general]
description = Event Based PSHA using Area Source
calculation_mode = event_based
random_seed = 23
[geometry]
sites = 0.5 -0.5
[logic_tree]
number_of_logic_tree_samples = 0
[erf]
rupture_mesh_spacing = 2
width_of_mfd_bin = 0.1
area_source_discretization = 5.0
[site_params]
reference_vs30_type = measured
reference_vs30_value = 600.0
reference_depth_to_2pt5km_per_sec = 5.0
reference_depth_to_1pt0km_per_sec = 100.0
[calculation]
source_model_logic_tree_file = source_model_logic_tree.xml
gsim_logic_tree_file = gmpe_logic_tree.xml
investigation_time = 50.0
intensity_measure_types_and_levels = "PGA": [...]
truncation_level = 3
maximum_distance = 200.0
[event_based_params]
ses_per_logic_tree_path = 100
ground_motion_correlation_model =
ground_motion_correlation_params =
[output]
export_dir = ...
ground_motion_fields = true
hazard_curves_from_gmfs = true
mean_hazard_curves = false
quantile_hazard_curves =
hazard_maps = true
poes = 0.1
The source model consists of one source (area). 100 stochastic event sets are generated
(ses_per_logic_tree_path = 100); an example can be seen in Figure 5.4. Ground motion
fields are computed (ground_motion_fields = true, Figure 5.5) and hazard curves are
extracted from the ground motion fields (hazard_curves_from_gmfs = true).
Figure 5.4 – A stochastic event set generated with the event based PSHA demo. The area source
defines a nodal plane distribution which distributes events among vertical and dipping (50 degrees)
faults with equal weights. Vertical ruptures are then distributed equally in the range 0-180 degrees
while the dipping ones in the range 0-360, both with a step of 45 degrees.
The corresponding hazard maps for 0.1 probability of exceedance are also calculated
(hazard_maps = true).
Figure 5.5 – Ground motion fields (PGA) with no spatial correlations (a) and with spatial correlation
(b)
Part III
Risk
6. Introduction to the Risk Module
The seismic risk results are calculated using the OpenQuake risk library (oq-risklib), an open-
source suite of tools for seismic risk assessment and loss estimation. This library is written in
the Python programming language and available in the form of a “developers” release at the
following location: https://github.com/gem/oq-engine/tree/master/openquake/risklib.
The risk component of the OpenQuake-engine can compute both scenario-based and proba-
bilistic seismic damage and risk using various approaches. The following types of analysis
are currently supported:
• Scenario Damage Assessment, for the calculation of damage distribution statistics for
a portfolio of buildings from a single earthquake rupture scenario taking into account
aleatory and epistemic ground-motion variability.
• Scenario Risk Assessment, for the calculation of individual asset and portfolio loss
statistics due to a single earthquake rupture scenario taking into account aleatory and
epistemic ground-motion variability. Correlation in the vulnerability of different assets
of the same typology can also be taken into consideration.
• Classical Probabilistic Seismic Damage Analysis, for the calculation of damage state
probabilities over a specified time period, and probabilistic collapse maps, starting from
the hazard curves computed following the classical integration procedure (Cornell,
1968; McGuire, 1976) as formulated by Field et al., 2003.
• Classical Probabilistic Seismic Risk Analysis, for the calculation of loss curves and
loss maps, starting from the hazard curves computed following the classical integration
procedure (Cornell, 1968; McGuire, 1976) as formulated by Field et al., 2003.
• Stochastic Event Based Probabilistic Seismic Risk Analysis, for the calculation of
event loss tables starting from stochastic event sets. Other results such as loss-
exceedance curves, probabilistic loss maps, and average annual losses can be obtained
by post-processing the event loss tables.
• Retrofit Benefit/Cost Ratio Analysis, for the calculation of the benefit/cost ratio of
retrofitting a portfolio of assets, based on comparing the expected losses for the original
and the retrofitted configurations of the buildings.
Starting from OpenQuake-engine v1.7, the user may also provide consequence model files
for a scenario damage calculation in addition to fragility model files, in order to estimate
consequences based on the calculated damage distribution. The user may provide one
consequence model file corresponding to each loss type (amongst structural, nonstructural,
contents, and business interruption) for which a fragility model file is provided. Whereas
providing a fragility model file for at least one loss type is mandatory for running a Scenario
Damage calculation, providing corresponding consequence model files is optional.
In the Scenario Risk calculator, for each ground motion field realisation, a loss ratio is sampled
for every asset in the exposure model using the provided probabilistic vulnerability model,
taking into consideration the correlation model for vulnerability of different assets of a given
taxonomy. Finally, loss statistics, i.e., the mean loss and standard deviation of loss for
ground-up losses across all realizations, are calculated for each asset. Mean loss maps are
also generated by this calculator, describing the mean ground-up losses caused by the
scenario event for the different assets in the exposure model.
The input files required for running a scenario risk calculation and the resulting output files
are depicted in Figure 6.2.
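In symbols, if $L_{i,r}$ denotes the ground-up loss sampled for asset $i$ in realisation $r$ of
$R$ ground motion field realisations, the per-asset loss statistics described above are the
usual sample estimators (a sketch of the computation, not the engine's internal
implementation):

$$\bar{L}_i = \frac{1}{R}\sum_{r=1}^{R} L_{i,r}, \qquad
\sigma_{L_i} = \sqrt{\frac{1}{R-1}\sum_{r=1}^{R}\bigl(L_{i,r}-\bar{L}_i\bigr)^2}$$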
The classical PSHA-based risk calculator convolves, through numerical integration, the prob-
abilistic vulnerability functions for an asset with the seismic hazard curve at the location
of the asset, to give the loss distribution for the asset within a specified time period. The
calculator requires the definition of an exposure model, a vulnerability model for each loss
type of interest with vulnerability functions for each taxonomy represented in the exposure
model, and hazard curves calculated in the region of interest. Loss curves and loss maps
can currently be calculated for five different loss types using this calculator: structural
losses, nonstructural losses, contents losses, downtime losses, and occupant fatalities. The
main results of this calculator are loss exceedance curves for each asset, which describe the
probability of exceedance of different loss levels over the specified time period, and loss maps
for the region, which describe the loss values that have a given probability of exceedance
over the specified time period.
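Schematically, the convolution applies the total probability theorem: if the hazard curve
yields the probabilities $P(iml)$ of observing the different intensity levels $iml$ at the asset
location during the time period, and the vulnerability model yields $P(L > l \mid iml)$ for
the asset, then (a sketch of the underlying relation, not the engine's exact numerical scheme):

$$P(L > l) = \sum_{iml} P(L > l \mid iml)\; P(iml)$$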
Unlike the probabilistic event-based risk calculator, an aggregate loss curve (considering all
assets in the exposure model) cannot be extracted using this calculator, as the correlation of
the ground motion residuals and vulnerability uncertainty is not taken into consideration.
The hazard curves required for this calculator can be calculated by the OpenQuake-engine for
all asset locations in the exposure model using the classical PSHA approach (Cornell, 1968;
McGuire, 1976). The use of logic trees allows for the consideration of model uncertainty in
the choice of a ground motion prediction equation for the different tectonic region types in
the region. The input files required for running a classical probabilistic risk calculation and
the resulting output files are depicted in Figure 6.4.
In the stochastic event based risk calculator, for each earthquake rupture generated by a
seismic source, the number of occurrences in the given time span $T$ is simulated by
sampling the corresponding probability distribution as given by $P_{rup}(k|T)$. A SES is
therefore a sample of the full population of earthquake ruptures as defined by a Seismic
Source Model. Each earthquake rupture is present zero, one or more times, depending on its
probability. Symbolically, we can define a SES as:

$$\mathrm{SES}(T) = \{\, k \times rup \;:\; k \sim P_{rup}(k|T), \ \forall\, rup \in Src, \ \forall\, Src \in SSM \,\} \tag{6.1}$$

where $k$, the number of occurrences, is a random sample of $P_{rup}(k|T)$, and $k \times rup$ means
that earthquake rupture $rup$ is repeated $k$ times in the SES.
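For instance, if the temporal occurrence model of a rupture is Poissonian with annual rate
$\lambda$ (the occurrence model actually used depends on the source; Poisson is assumed here
purely for illustration), the number of occurrences is sampled from

$$P_{rup}(k \mid T) = \frac{(\lambda T)^k e^{-\lambda T}}{k!},$$

so a rupture with $\lambda = 0.01$ events per year has a mean of $\lambda T = 0.5$ occurrences in a
50-year SES: it is absent from about 61% of the sampled SESs ($e^{-0.5} \approx 0.61$) and appears
once or more in the remainder.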
For each earthquake rupture or event in the SESs, a spatially correlated GMF realisation
is generated, taking into consideration both the inter-event variability of ground motions,
and the intra-event residuals obtained from a spatial correlation model for ground motion
residuals. The use of logic trees allows for the consideration of uncertainty in the choice of a
Seismic Source Model, and in the choice of ground-motion models for the different tectonic
regions.
For each GMF realization, a loss ratio is sampled for every asset in the exposure model using
the provided probabilistic vulnerability model, taking into consideration the correlation
model for vulnerability of different assets of a given taxonomy. Finally, loss exceedance
curves are computed for ground-up losses.
The input files required for running a probabilistic stochastic event-based risk calculation
and the resulting output files are depicted in Figure 6.5.
This calculator was developed to estimate whether retrofitting a collection of existing
buildings is advantageous from an economic point of view. For this assessment, the expected
losses considering the original and retrofitted configuration of the buildings are estimated,
and the economic benefit due to the better seismic design is divided by the retrofitting cost,
leading to the benefit/cost ratio. The loss curves are computed using the previously described
Classical PSHA-based Risk calculator. The output of this calculator is a benefit/cost ratio for
each asset, in which a ratio above one indicates that employing a retrofitting intervention is
economically viable.
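A sketch of the quantity computed for each asset, following the standard benefit/cost
formulation (here AAL denotes the average annual loss implied by each loss curve, $r$ an
annual discount rate, $t$ the remaining useful life of the asset, and $C_{retro}$ the retrofitting
cost; the symbols are illustrative, not the engine's parameter names):

$$BCR = \frac{\left(AAL_{original} - AAL_{retrofitted}\right)\,\dfrac{1 - e^{-r t}}{r}}{C_{retro}}$$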
In Figure 6.6, the input/output structure for this calculator is depicted.
For further information regarding the theoretical background of the methodologies used for
each calculator, users are referred to the OpenQuake-engine Book (Risk).
7. Risk Input Models
The following sections describe the basic inputs required for a risk calculation, including ex-
posure models, fragility models, consequence models, and vulnerability models. In addition,
each risk calculator also requires the appropriate hazard inputs computed in the region of
interest. Hazard inputs include hazard curves for the classical probabilistic damage and risk
calculators, GMFs for the scenario damage and risk calculators, or SESs for the probabilistic
event based calculators.
The information in the metadata section is common to all of the assets in the portfolio and
needs to be incorporated at the beginning of every exposure model file. There are a number
of parameters that compose the metadata section, which is intended to provide general
5 <exposureModel id="exposure_example"
6 category="buildings"
7 taxonomySource="GEM_Building_Taxonomy_2.0">
8 <description>Exposure Model Example</description>
9
10 <conversions>
11 <costTypes>
12 <costType name="structural" type="per_area" unit="USD" />
13 </costTypes>
14 <area type="per_asset" unit="SQM" />
15 </conversions>
16
17 <assets>
18 <asset id="a1" taxonomy="Adobe" number="5" area="100" >
19 <location lon="-122.000" lat="38.113" />
20 <costs>
21 <cost type="structural" value="10000" />
22 </costs>
23 <occupancies>
24 <occupancy occupants="20" period="day" />
25 </occupancies>
26 </asset>
27 </assets>
28
29 </exposureModel>
30
31 </nrml>
information regarding the assets within the exposure model. These parameters are described
below:
• id: mandatory; a unique string used to identify the exposure model. This string can
contain letters (a–z; A–Z), numbers (0–9), dashes (-), and underscores (_), with a
maximum of 100 characters.
• category: an optional string used to define the type of assets being stored (e.g:
buildings, lifelines).
• taxonomySource: an optional attribute used to define the taxonomy being used to
classify the assets.
• description: mandatory; a brief string (ASCII) with further information about the
exposure model.
Next, let us look at the part of the file describing the area and cost conversions:
10 <conversions>
11 <costTypes>
12 <costType name="structural" type="per_area" unit="USD" />
13 </costTypes>
14 <area type="per_asset" unit="SQM" />
15 </conversions>
Notice that the costType element defines a name, a type, and a unit attribute.
The NRML schema for the exposure model allows the definition of a structural cost, a
nonstructural components cost, a contents cost, and a business interruption or downtime cost
for each asset in the portfolio. Thus, the valid values for the name attribute of the costType
element are the following:
• structural: used to specify the structural replacement cost of assets
• nonstructural: used to specify the replacement cost for the nonstructural compo-
nents of assets
• contents: used to specify the contents replacement cost
• business_interruption: used to specify the cost that will be incurred per unit
time that a damaged asset remains closed following an earthquake
The exposure model shown in the example above defines only the structural values for the
assets. However, multiple cost types can be defined for each asset in the same exposure
model.
The unit attribute of the costType element is used for specifying the currency unit for the
corresponding cost type. Note that the OpenQuake-engine itself is agnostic to the currency
units; the unit is thus a descriptive attribute which is used by the OpenQuake-engine to
annotate the results of a risk assessment. This attribute can be set to any valid Unicode
string.
The type attribute of the costType element specifies whether the costs will be provided as
an aggregated value for an asset, or per building or unit comprising an asset, or per unit
area of an asset. The valid values for the type attribute of the costType element are the
following:
• aggregated: indicates that the costs will be provided as an aggregated value for each
asset
• per_asset: indicates that the costs will be provided per building or unit comprising
each asset
• per_area: indicates that the costs will be provided per unit area of each asset
If the costs are to be specified per_area for any of the costTypes, the area element will
also need to be defined in the conversions section. The area element defines a type, and a
unit attribute.
The unit attribute of the area element is used for specifying the units for the area of an asset.
The OpenQuake-engine itself is agnostic to the area units; the unit is thus a descriptive
attribute which is used by the OpenQuake-engine to annotate the results of a risk assessment.
This attribute can be set to any valid ASCII string.
The type attribute of the area element specifies whether the area will be provided as an
aggregated value for an asset, or per building or unit comprising an asset. The valid values
for the type attribute of the area element are the following:
• aggregated: indicates that the area will be provided as an aggregated value for each
asset
• per_asset: indicates that the area will be provided per building or unit comprising
each asset
The way the information about the characteristics of the assets in an exposure model is
stored can vary strongly depending on how and why the data was compiled. As an example,
if national census information is used to estimate the distribution of assets in a given region,
it is likely that the number of buildings within a given geographical area will be used to
define the dataset, and will be used for estimating the number of collapsed buildings for a
scenario earthquake. On the other hand, if simplified methodologies based on proxy data
such as population distribution are used to develop the exposure model, then it is likely that
the built-up area or economic cost of each building typology will be directly derived, and
will be used for the estimation of economic losses.
Finally, let us look at the part of the file describing the set of assets in the portfolio to be used
in seismic damage or risk calculations:
17 <assets>
18 <asset id="a1" taxonomy="Adobe" number="5" area="100" >
19 <location lon="-122.000" lat="38.113" />
20 <costs>
21 <cost type="structural" value="10000" />
22 </costs>
23 <occupancies>
24 <occupancy occupants="20" period="day" />
25 </occupancies>
26 </asset>
27 </assets>
Each asset definition involves specifying a set of mandatory and optional attributes con-
cerning the asset. The following set of attributes can be assigned to each asset based on the
current schema for the exposure model:
• id: mandatory; a unique string used to identify the given asset, which is used by the
OpenQuake-engine to relate each asset with its associated results. This string can
contain letters (a–z; A–Z), numbers (0–9), dashes (-), and underscores (_), with a
maximum of 100 characters.
• taxonomy: mandatory; this string specifies the building typology of the given asset.
The taxonomy strings can be user-defined, or based on an existing classification scheme
such as the GEM Taxonomy, PAGER, or EMS-98.
• number: the number of individual structural units comprising a given asset. This
attribute is mandatory for damage calculations. For risk calculations, this attribute
must be defined if either the area or any of the costs are provided per structural unit
comprising each asset.
• area: area of the asset, at a given location. As mentioned earlier, the area is a
mandatory attribute only if any one of the costs for the asset is specified per unit area.
• location: mandatory; specifies the longitude (between -180° and 180°) and latitude
(between -90° and 90°) of the given asset, both specified in decimal degrees (see the
note below).
• costs: specifies a set of costs for the given asset. The replacement value for different
cost types must be provided on separate lines within the costs element. As shown
in the example above, each cost entry must define the type and the value. Cur-
rently supported valid options for the cost type are: structural, nonstructural,
contents, and business_interruption.
• occupancies: mandatory only for probabilistic or scenario risk calculations that
specify an occupants_vulnerability_file. Each entry within this element spec-
ifies the number of occupants for the asset for a particular period of the day. As
shown in the example above, each occupancy entry must define the period and the
occupants. Currently supported valid options for the period are: day, transit,
and night. Currently, the number of occupants for an asset can only be provided as
an aggregated value for the asset.

Note: Within the OpenQuake-engine, longitude and latitude coordinates are internally
rounded to a precision of 5 digits after the decimal point.
For the purposes of performing a retrofitting benefit/cost analysis, it is also necessary to
define the retrofitting cost (retrofitted). The combinations of the possible options in
which these three attributes (number, area, and costs) can be defined lead to four ways of
storing the information about the assets. For each of these cases a brief explanation and
example is provided in this section.
Example 1
This example illustrates an exposure model in which the aggregated cost (structural, non-
structural, contents and business interruption) of the assets of each taxonomy for a set
of locations is directly provided. Thus, in order to indicate how the various costs will be
defined, the following information needs to be stored in the exposure model file, as shown
in Listing 34.
8 <description>
9 Exposure model with aggregated replacement costs for each asset
10 </description>
11 <conversions>
12 <costTypes>
13 <costType name="structural" type="aggregated" unit="USD" />
14 <costType name="nonstructural" type="aggregated" unit="USD" />
15 <costType name="contents" type="aggregated" unit="USD" />
16 <costType name="business_interruption" type="aggregated" unit="USD/month"/>
17 </costTypes>
18 </conversions>
Listing 34 – Example exposure model using aggregate costs: metadata definition (Download
example)
In this case, the cost type of each component has been defined as aggregated. Once the
way in which each cost is going to be defined has been established, the values for each asset
can be stored according to the format shown in Listing 35.
Each asset is uniquely identified by its id. Then, a pair of coordinates (latitude and longi-
tude) for a location where the asset is assumed to exist is defined. Each asset must be
classified according to a taxonomy, so that the OpenQuake-engine is capable of employing
the appropriate vulnerability function or fragility function in the risk calculations. Finally,
the cost values of each type are stored within the costs attribute. In this example, the
aggregated value for all structural units (within a given asset) at each location is provided
directly, so there is no need to define other attributes such as number or area. This mode
of representing an exposure model is probably the simplest one.
19 <assets>
20 <asset id="a1" taxonomy="Adobe" >
21 <location lon="-122.000" lat="38.113" />
22 <costs>
23 <cost type="structural" value="20000" />
24 <cost type="nonstructural" value="30000" />
25 <cost type="contents" value="10000" />
26 <cost type="business_interruption" value="4000" />
27 </costs>
28 </asset>
29 </assets>
Listing 35 – Example exposure model using aggregate costs: assets definition (Download example)
Example 2
In the snippet shown in Listing 36, an exposure model containing the number of structural
units and the associated costs per unit of each asset is presented.
8 <description>
9 Exposure model with replacement costs per building for each asset
10 </description>
11 <conversions>
12 <costTypes>
13 <costType name="structural" type="per_asset" unit="USD" />
14 <costType name="nonstructural" type="per_asset" unit="USD" />
15 <costType name="contents" type="per_asset" unit="USD" />
16 <costType name="business_interruption" type="per_asset" unit="USD/month"/>
17 </costTypes>
18 </conversions>
Listing 36 – Example exposure model using costs per unit: metadata definition (Download example)
For this case, the cost type has been set to per_asset. Then, the information for each
asset can be stored following the format shown in Listing 37.
In this example, the various costs for each asset are not provided directly, as in the previous
example. In order to carry out risk calculations that require the economic cost of each asset,
the OpenQuake-engine multiplies, for each asset, the number of units (buildings)
by the "per asset" replacement cost. Note that in this case, there is no need to specify the
attribute area.
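For instance, using the values for asset a1 in Listing 37:

$$\text{total structural cost} = 2 \times 7{,}500\ \text{USD} = 15{,}000\ \text{USD}$$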
Example 3
The example shown in Listing 38 comprises an exposure model containing the built up area
of each asset, and the associated costs are provided per unit area.
In order to compile an exposure model with this structure, the cost type should be set
19 <assets>
20 <asset id="a1" number="2" taxonomy="Adobe" >
21 <location lon="-122.000" lat="38.113" />
22 <costs>
23 <cost type="structural" value="7500" />
24 <cost type="nonstructural" value="11250" />
25 <cost type="contents" value="3750" />
26 <cost type="business_interruption" value="1500" />
27 </costs>
28 </asset>
29 </assets>
Listing 37 – Example exposure model using costs per unit: assets definition (Download example)
8 <description>
9 Exposure model with replacement costs per unit area;
10 and areas provided as aggregated values for each asset
11 </description>
12 <conversions>
13 <area type="aggregated" unit="SQM" />
14 <costTypes>
15 <costType name="structural" type="per_area" unit="USD" />
16 <costType name="nonstructural" type="per_area" unit="USD" />
17 <costType name="contents" type="per_area" unit="USD" />
18 <costType name="business_interruption" type="per_area" unit="USD/month"/>
19 </costTypes>
20 </conversions>
Listing 38 – Example exposure model using costs per unit area and aggregated areas: metadata
definition (Download example)
to per_area. In addition, it is also necessary to specify whether the area being stored
represents the aggregated area of all units within an asset, or the average area of
a single unit. In this particular case, the area that is being stored is the aggregated built-up
area per asset, and thus this attribute was set to aggregated. Listing 39 illustrates the
definition of the assets for this example.
21 <assets>
22 <asset id="a1" area="1000" taxonomy="Adobe" >
23 <location lon="-122.000" lat="38.113" />
24 <costs>
25 <cost type="structural" value="5" />
26 <cost type="nonstructural" value="7.5" />
27 <cost type="contents" value="2.5" />
28 <cost type="business_interruption" value="1" />
29 </costs>
30 </asset>
31 </assets>
Listing 39 – Example exposure model using costs per unit area and aggregated areas: assets
definition (Download example)
Once again, the OpenQuake-engine needs to carry out some calculations in order to compute
the different costs per asset. In this case, this value is computed by multiplying the aggregated
built-up area of each asset by the associated cost per unit area. Notice that in this case,
there is no need to specify the attribute number.
Example 4
This example demonstrates an exposure model that defines the number of structural units
for each asset, the average built up area per structural unit and the associated costs per unit
area. Listing 40 shows the metadata definition for an exposure model built in this manner.
Similarly to what was described in the previous example, the various cost types also need to
be established as per_area, but the type of area is now defined as per_asset. Listing 41
illustrates the definition of the assets for this example.
In this example, the OpenQuake-engine will make use of all the parameters to estimate the
various costs of each asset, by multiplying the number of structural units by its average built
up area, and then by the respective cost per unit area.
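Summarising Examples 1–4, the total cost of an asset for a given cost type is derived as
follows:

$$\text{total cost} =
\begin{cases}
\text{value} & \text{(aggregated)} \\
\text{number} \times \text{value} & \text{(per\_asset)} \\
\text{area} \times \text{value} & \text{(per\_area, aggregated area)} \\
\text{number} \times \text{area} \times \text{value} & \text{(per\_area, area per\_asset)}
\end{cases}$$

For asset a1 in Listing 41, for example, the structural cost is
3 × 400 SQM × 10 USD = 12,000 USD.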
Example 5
In this example, additional information will be included, which is required for other risk
analysis besides loss estimation, such as the benefit/cost analysis.
In order to perform a benefit/cost assessment, it is necessary to indicate the retrofitting cost.
This parameter is handled in the same manner as the structural cost, and it should be stored
according to the format shown in Listing 42.
8 <description>
9 Exposure model with replacement costs per unit area;
10 and areas provided per building for each asset
11 </description>
12 <conversions>
13 <area type="per_asset" unit="SQM" />
14 <costTypes>
15 <costType name="structural" type="per_area" unit="USD" />
16 <costType name="nonstructural" type="per_area" unit="USD" />
17 <costType name="contents" type="per_area" unit="USD" />
18 <costType name="business_interruption" type="per_area" unit="USD/month"/>
19 </costTypes>
20 </conversions>
Listing 40 – Example exposure model using costs per unit area and areas per unit: metadata
definition (Download example)
21 <assets>
22 <asset id="a1" number="3" area="400" taxonomy="Adobe" >
23 <location lon="-122.000" lat="38.113" />
24 <costs>
25 <cost type="structural" value="10" />
26 <cost type="nonstructural" value="15" />
27 <cost type="contents" value="5" />
28 <cost type="business_interruption" value="2" />
29 </costs>
30 </asset>
31 </assets>
Listing 41 – Example exposure model using costs per unit area and areas per unit: assets definition
(Download example)
5 <exposureModel id="exposure_example"
6 category="buildings"
7 taxonomySource="GEM_Building_Taxonomy_2.0">
8 <description>Exposure model illustrating retrofit costs</description>
9 <conversions>
10 <costTypes>
11 <costType name="structural" type="aggregated" unit="USD"
12 retrofittedType="per_asset" retrofittedUnit="USD" />
13 </costTypes>
14 </conversions>
15 <assets>
16 <asset id="a1" taxonomy="Adobe" number="1" >
17 <location lon="-122.000" lat="38.113" />
18 <costs>
19 <cost type="structural" value="10000" retrofitted="2000" />
20 </costs>
21 </asset>
22 </assets>
23 </exposureModel>
24
25 </nrml>
Listing 42 – Example exposure model specifying a retrofitting cost (Download example)
Although this demonstration of how the retrofitting cost can be stored uses the aggregated
cost structure described in Example 1, it is important to mention that any of the other cost
storing approaches can also be employed (Examples 2–4).
Example 6
The OpenQuake-engine is also capable of estimating human losses, based on the number of
occupants in an asset at a certain time of the day. The example exposure model shown in
Listing 43 illustrates how this parameter is defined for each asset. In addition, this example
also serves the purpose of presenting an exposure model in which four cost types have been
defined using three different options.
The structural cost is defined as the aggregate replacement cost for all of the buildings
comprising the asset (Example 1), the nonstructural value is defined as the replacement
cost per unit area, where the area is defined per building comprising the asset (Example 4),
and the contents and business_interruption values are provided per building comprising
the asset (Example 2). The number of occupants at different times of the day is also provided
as an aggregated value for all of the buildings comprising the asset.
Example 7
Starting from OpenQuake-engine v2.7, the user may also provide a set of tags for each asset
in the exposure model. The primary intended use case for the tags is to enable aggregation
or accumulation of risk results (casualties / damages / losses) for each tag. The tags could
be used to specify location attributes, occupancy types, or insurance policy codes for the
different assets in the exposure model.
The example exposure model shown in Listing 44 illustrates how one or more tags can be
defined for each asset.
The list of tag names that will be used in the exposure model must be provided in the
metadata section of the exposure file, and the values of the different tags can then be
specified for each asset, as shown in Listing 44.
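For reference, the tag names are declared in the metadata section using the tagNames
element; a sketch assuming the six tags used in Listing 44 would read:

<tagNames>state county tract city zip cresta</tagNames>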
Note that it is not mandatory to provide a tag value for every tag name specified in the
metadata section for each asset.
Example 8
Starting from OpenQuake-engine v3.0, the exposure model may be provided using csv files
listing the asset information, along with an xml file conatining the metadata section for the
7.1 Exposure Models 123
5 <exposureModel id="exposure_example"
6 category="buildings"
7 taxonomySource="GEM_Building_Taxonomy_2.0">
8 <description>Exposure model example with occupants</description>
9 <conversions>
10 <costTypes>
11 <costType name="structural" type="aggregated" unit="USD" />
12 <costType name="nonstructural" type="per_area" unit="USD" />
13 <costType name="contents" type="per_asset" unit="USD" />
14 <costType name="business_interruption" type="per_asset" unit="USD/month" />
15 </costTypes>
16 <area type="per_asset" unit="SQM" />
17 </conversions>
18 <assets>
19 <asset id="a1" taxonomy="Adobe" number="5" area="200" >
20 <location lon="-122.000" lat="38.113" />
21 <costs>
22 <cost type="structural" value="20000" />
23 <cost type="nonstructural" value="15" />
24 <cost type="contents" value="2400" />
25 <cost type="business_interruption" value="1500" />
26 </costs>
27 <occupancies>
28 <occupancy occupants="6" period="day" />
29 <occupancy occupants="10" period="transit" />
30 <occupancy occupants="20" period="night" />
31 </occupancies>
32 </asset>
33 </assets>
34 </exposureModel>
35
36 </nrml>
Listing 43 – Example exposure model specifying the aggregate number of occupants per asset
(Download example)
5 <exposureModel id="exposure_example_with_tags"
6 category="buildings"
7 taxonomySource="GEM_Building_Taxonomy_2.0">
8 <description>Exposure Model Example with Tags</description>
9
10 <conversions>
11 <costTypes>
12 <costType name="structural" type="per_area" unit="USD" />
13 </costTypes>
14 <area type="per_asset" unit="SQM" />
15 </conversions>
16
19 <assets>
20 <asset id="a1" taxonomy="Adobe" number="5" area="100" >
21 <location lon="-122.000" lat="38.113" />
22 <costs>
23 <cost type="structural" value="10000" />
24 </costs>
25 <occupancies>
26 <occupancy occupants="20" period="day" />
27 </occupancies>
28 <tags state="California" county="Solano" tract="252702"
29 city="Suisun" zip="94585" cresta="A.11"/>
30 </asset>
31 </assets>
32
33 </exposureModel>
34
35 </nrml>
Listing 44 – Example exposure model specifying six location based tags for each asset (Download
example)
5 <exposureModel id="exposure_example_with_csv_files"
6 category="buildings"
7 taxonomySource="GEM_Building_Taxonomy_3.0">
8 <description>Exposure Model Example with CSV Files</description>
9
10 <conversions>
11 <costTypes>
12 <costType name="structural" type="aggregated" unit="USD" />
13 <costType name="nonstructural" type="aggregated" unit="USD" />
14 <costType name="contents" type="aggregated" unit="USD" />
15 </costTypes>
16 <area type="per_asset" unit="SQFT" />
17 </conversions>
18
19 <occupancyPeriods>night</occupancyPeriods>
20
23 <assets>
24 Washington.csv
25 Oregon.csv
26 California.csv
27 </assets>
28
29 </exposureModel>
30
31 </nrml>
Listing 45 – Example exposure model using csv files: metadata definition (Download example)
As in all previous examples, the information in the metadata section is common to all of the
assets in the portfolio.
The asset data can be provided in one or more csv files. The path to each of the csv files
containing the asset data must be listed between the <assets> and </assets> xml tags.
In the example shown above, the exposure information is provided in three csv files, Wash-
ington.csv, Oregon.csv, and California.csv. To illustrate the format of the csv files, we have
shown below the header and first few lines of the file Washington.csv in Table 7.1.
Table 7.1 – Example exposure csv file: header and first few rows of Washington.csv
id lon lat taxonomy number structural nonstructural contents area night occupancy state_id state county_id county tract
53041971200-AGR1-W1-LC -122.72877 46.51267 AGR1-W1-LC 7.6 898000 1046000 1945000 18 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-PC1-LC -122.72877 46.51267 AGR1-PC1-LC 0.6 67000 78000 146000 1 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-C2L-PC -122.72877 46.51267 AGR1-C2L-PC 0.6 67000 78000 146000 1 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-PC1-PC -122.72877 46.51267 AGR1-PC1-PC 1.5 179000 208000 387000 4 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-S2L-LC -122.72877 46.51267 AGR1-S2L-LC 0.6 67000 78000 144000 1 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-S1L-PC -122.72877 46.51267 AGR1-S1L-PC 1.1 133000 155000 289000 3 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-S2L-PC -122.72877 46.51267 AGR1-S2L-PC 1.5 182000 212000 394000 4 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-S3-PC -122.72877 46.51267 AGR1-S3-PC 1.1 133000 155000 289000 3 0.0 Agr 53 Washington 53041 Lewis County 53041971200
53041971200-AGR1-RM1L-LC -122.72877 46.51267 AGR1-RM1L-LC 0.6 68000 80000 148000 1 0.0 Agr 53 Washington 53041 Lewis County 53041971200
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
Note that the xml metadata section for exposure models provided using csv files must include
the xml tag <occupancyPeriods> listing the periods of the day for which the number of
occupants in each asset will be listed in the csv files. In case the number of occupants is
not listed in the csv files, a self-closing tag <occupancyPeriods /> should be included in
the xml metadata section.
A web-based tool to build an exposure model in the NRML schema starting from a csv
file or a spreadsheet can be found at the OpenQuake platform at the following address:
https://platform.openquake.org/ipt/.
This section describes the schema currently used to store fragility models, which are required
for the Scenario Damage Calculator and the Classical Probabilistic Seismic Damage Calculator.
In order to perform probabilistic or scenario damage calculations, it is necessary to define a
fragility function for each building typology present in the exposure model. A fragility model
defines a set of fragility functions, describing the probability of exceeding a set of limit, or
damage, states. The fragility functions can be defined using either a discrete or a continuous
format, and the fragility model file can include a mix of both types of fragility functions.
For discrete fragility functions, sets of probabilities of exceedance (one set per limit state)
are defined for a list of intensity measure levels, as illustrated in Figure 7.1.
The fragility functions can also be defined as continuous functions, through the use of
cumulative lognormal distribution functions. In Figure 7.2, a continuous fragility model is
presented.
An example fragility model comprising one discrete fragility function and one continuous
fragility function is shown in Listing 46.
The initial portion of the schema contains metadata that describes general aspects of the
fragility model. The information in this metadata section is common to all of the functions
in the fragility model and needs to be included at the beginning of every fragility model file.
The parameters of the metadata section are shown in the snippet below and described after
the snippet:
4 <fragilityModel id="fragility_example"
5 assetCategory="buildings"
6 lossCategory="structural">
7
27 </fragilityModel>
28
29 </nrml>
Listing 46 – Example fragility model comprising one discrete fragility function and one continuous
fragility function (Download example)
Figure 7.1 – Example of a discrete fragility model: probability of exceedance versus PGA (g) for
limit states ds1, ds2, ds3 and ds4
4 <fragilityModel id="fragility_example"
5 assetCategory="buildings"
6 lossCategory="structural">
7
• id: mandatory; a unique string used to identify the fragility model. This string can
contain letters (a–z; A–Z), numbers (0–9), dashes (-), and underscores (_), with a
maximum of 100 characters.
• assetCategory: an optional string used to specify the type of assets for which
fragility functions will be defined in this file (e.g: buildings, lifelines).
• lossCategory: mandatory; valid strings for this attribute are “structural”, “nonstruc-
tural”, “contents”, and “business_interruption”.
• description: mandatory; a brief string (ASCII) with further relevant information
about the fragility model, for example, which building typologies are covered or the
source of the functions in the fragility model.
• limitStates: mandatory; this field is used to define the number and nomenclature
of each limit state. Four limit states are employed in the example above, but it is
possible to use any number of discrete states, as long as a fragility curve is always
defined for each limit state. The limit states must be provided as a set of strings
separated by whitespaces between each limit state. Each limit state string can contain
letters (a–z; A–Z), numbers (0–9), dashes (-), and underscores (_). Please ensure that
there is no whitespace within the name of any individual limit state.
A discrete fragility function lists, for each limit state, the probabilities of exceedance at a
common set of intensity measure levels.
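A minimal sketch of such a function in the NRML v0.5 format (the function id and all
numeric values below are illustrative placeholders, not taken from the example file):

<fragilityFunction id="Tax_1" format="discrete">
  <imls imt="PGA" noDamageLimit="0.05">0.005 0.2 0.4 0.6 1.0</imls>
  <poes ls="ds1">0.00 0.64 0.98 1.00 1.00</poes>
  <poes ls="ds2">0.00 0.18 0.82 0.96 1.00</poes>
  <poes ls="ds3">0.00 0.02 0.50 0.86 1.00</poes>
  <poes ls="ds4">0.00 0.00 0.12 0.36 0.82</poes>
</fragilityFunction>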
Figure 7.2 – Example of a continuous fragility model: probability of exceedance versus PGA (g)
for limit states ds1, ds2, ds3 and ds4
For discrete fragility functions, the probabilities of exceedance for each limit state are listed
for the set of intensity measure levels specified in the imls element. Finally, the number
and names of the limit states in each fragility function must be equal to the number of
limit states defined earlier in the metadata section of the fragility model using the attribute
limitStates.
Continuous fragility functions are instead defined by the mean and standard deviation of a
cumulative lognormal distribution for each limit state.
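A minimal sketch of a continuous fragility function (again with an illustrative id and
placeholder values; the attribute names follow the NRML v0.5 schema):

<fragilityFunction id="Tax_2" format="continuous">
  <imls imt="SA(0.3)" noDamageLimit="0.05" minIML="0.0" maxIML="5.0"/>
  <params ls="ds1" mean="0.30" stddev="0.15"/>
  <params ls="ds2" mean="0.60" stddev="0.25"/>
  <params ls="ds3" mean="1.20" stddev="0.40"/>
  <params ls="ds4" mean="2.40" stddev="0.70"/>
</fragilityFunction>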
Note that the fragility model schema described in this section is NRML v0.5, introduced
in oq-engine 1.7; the earlier NRML v0.4 format is deprecated. A deprecation warning is
printed every time you attempt to use a fragility model in the old NRML v0.4 format in an
oq-engine 1.7 (or later) risk calculation. To get rid of the warning you must upgrade the old
fragility model files to NRML v0.5. You can do this using the upgrade_nrml command of
oq as follows:
user@ubuntu:~$ oq upgrade_nrml <directory-name>
The above command will upgrade all of your old fragility model files to NRML v0.5. The original files will be kept, but with a .bak extension appended. Notice that you will need to set the lossCategory attribute to its correct value manually. This is easy to do: if you try to run a computation, you will get a clear error message stating the expected value of the lossCategory for each file.
Several methodologies to derive fragility functions are currently being evaluated by GEM and have been included as part of the Risk Modeller’s Toolkit, the code for which can be found in a public repository on GitHub at the following address: http://github.com/gemsciencetools/rmtk.
A web-based tool to build a fragility model in the NRML schema is also under development, and can be found at the OpenQuake platform at the following address: https://platform.openquake.org/ipt/.
Starting from OpenQuake-engine v1.7, the Scenario Damage calculator also accepts con-
sequence models in addition to fragility models, in order to estimate consequences based
on the calculated damage distribution. The user may provide one consequence model file
corresponding to each loss type (amongst structural, nonstructural, contents, and business
interruption) for which a fragility model file is provided. Whereas providing a fragility
model file for at least one loss type is mandatory for running a Scenario Damage calculation,
providing corresponding consequence model files is optional.
This section describes the schema currently used to store consequence models, which are optional inputs for the Scenario Damage Calculator. A consequence model defines a set of consequence functions, describing the distribution of the loss (or consequence) ratio conditional on a set of discrete limit (or damage) states. These consequence functions can currently be defined in the OpenQuake-engine by specifying the parameters of the continuous distribution of the loss ratio for each limit state specified in the fragility model for the corresponding loss type, for each taxonomy defined in the exposure model.
An example consequence model is shown in Listing 47.
The initial portion of the schema contains metadata describing general aspects of the consequence model. The information in this metadata section is common to all of the functions in the consequence model and needs to be included at the beginning of every consequence model file.
4 <consequenceModel id="consequence_example"
5 assetCategory="buildings"
6 lossCategory="structural">
7
18 </consequenceModel>
19
20 </nrml>
Listing 47 – Example consequence model (Download example)
4 <consequenceModel id="consequence_example"
5 assetCategory="buildings"
6 lossCategory="structural">
7
The following snippet from the above consequence model example file defines a consequence
function using a lognormal distribution to model the uncertainty in the consequence ratio
for each limit state:
11 <consequenceFunction id="RC_LowRise" dist="LN">
12 <params ls="slight" mean="0.04" stddev="0.00"/>
13 <params ls="moderate" mean="0.16" stddev="0.00"/>
14 <params ls="extensive" mean="0.32" stddev="0.00"/>
15 <params ls="complete" mean="0.64" stddev="0.00"/>
16 </consequenceFunction>
² Note that as of OpenQuake-engine v1.8, the uncertainty in the consequence ratios is ignored, and only the mean consequence ratios for the set of limit states are considered when computing the consequences from the damage distribution. Consideration of the uncertainty in the consequence ratios is planned for future releases of the OpenQuake-engine.
[Figure 7.3: loss ratio plotted against PGA (g)]
Note that although the uncertainty for each loss ratio is not represented in Figure 7.3, it can be considered in the input file, by means of a coefficient of variation per loss ratio and a probabilistic distribution, which can currently be set to lognormal (LN) or Beta (BT); or by specifying a discrete probability mass (PM)³ distribution of the loss ratio at a set of intensity levels. An example of a vulnerability function that models the uncertainty in the loss ratio at different intensity levels using a lognormal distribution is illustrated in Figure 7.4.
In general, defining vulnerability functions requires the user to specify the distribution of
the loss ratio for a set of intensity levels. The loss ratio distributions can be defined using
either a discrete or a continuous format, and the vulnerability model file can include a mix
of both types of vulnerability functions. It is also possible to define a vulnerability function
using a set of deterministic loss ratios corresponding to a set of intensity levels (i.e., ignoring
the uncertainty in the conditional loss ratios).
An example vulnerability model comprising three vulnerability functions is shown in Listing 48. This vulnerability model contains one function that uses the lognormal distribution to represent the uncertainty in the loss ratio at different intensity levels, one function that uses the Beta distribution, and one function that is defined using a discrete probability mass distribution.
³ As of OpenQuake-engine v1.8, the “PM” option for defining vulnerability functions is supported by the Scenario Risk and the Stochastic Event-Based Probabilistic Risk Calculators, but not by the Classical Probabilistic Risk Calculator.
Figure 7.4 – Graphical representation of a vulnerability function that models the uncertainty in the loss ratio using a lognormal distribution. The mean loss ratios and coefficients of variation are illustrated for a set of intensity levels.
The initial portion of the schema contains metadata describing general aspects of the vulnerability model. The information in this metadata section is common to all of the functions in the vulnerability model and needs to be included at the beginning of every vulnerability model file. The parameters are illustrated in the snippet shown and described below:
4 <vulnerabilityModel id="vulnerability_example"
5 assetCategory="buildings"
6 lossCategory="structural">
7
8 <description>vulnerability model</description>
• id: a unique string (ASCII) used to identify the vulnerability model. This string can
contain letters (a–z; A–Z), numbers (0–9), dashes (-), and underscores (_), with a
maximum of 100 characters.
• assetCategory: an optional string (ASCII) used to specify the type of assets for which vulnerability functions will be defined in this file (e.g. buildings, lifelines).
• lossCategory: mandatory; valid strings for this attribute are “structural”, “nonstruc-
tural”, “contents”, “business_interruption”, and “occupants”.
• description: mandatory; a brief string with further information about the vulnera-
bility model, for example, which building typologies are covered or the source of the
functions in the vulnerability model.
4 <vulnerabilityModel id="vulnerability_example"
5 assetCategory="buildings"
6 lossCategory="structural">
7
8 <description>vulnerability model</description>
9
16
23
35 </vulnerabilityModel>
36
37 </nrml>
Listing 48 – Example vulnerability model comprising three vulnerability functions (Download example)
The following snippet from the above vulnerability model example file defines a vulnerability function modelling the uncertainty in the conditional loss ratios using a (continuous) lognormal distribution:
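As an indicative sketch (the taxonomy id, intensity levels, mean loss ratios, and coefficients of variation below are illustrative, not values from the original example):

<vulnerabilityFunction id="Example_Building_Class" dist="LN">
  <imls imt="PGA">0.1 0.3 0.6 1.0 1.5</imls>
  <meanLRs>0.01 0.08 0.25 0.50 0.80</meanLRs>
  <covLRs>0.50 0.45 0.35 0.25 0.15</covLRs>
</vulnerabilityFunction>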
The following attributes are needed to define a vulnerability function which uses a continuous
distribution to model the uncertainty in the conditional loss ratios:
• id: a unique string (ASCII) used to identify the taxonomy for which the function is
being defined. This string is used to relate the vulnerability function with the relevant
asset in the exposure model. This string can contain letters (a–z; A–Z), numbers (0–9),
dashes (-), and underscores (_), with a maximum of 100 characters.
• dist: mandatory; for vulnerability functions which use a continuous distribution to
model the uncertainty in the conditional loss ratios, this attribute should be set to
either “LN” if using the lognormal distribution, or to “BT” if using the Beta distribution.
• imls: mandatory; this attribute specifies the list of intensity levels for which the
parameters of the conditional loss ratio distributions will be defined. In addition, it is
also necessary to define the intensity measure type (imt).
• meanLRs: mandatory; this field is used to define the mean loss ratios for this vulner-
ability function for each of the intensity levels defined by the attribute imls. The
number of mean loss ratios defined by the meanLRs attribute must be equal to the
number of intensity levels defined by the attribute imls.
• covLRs: mandatory; this field is used to define the coefficient of variation for the
conditional distribution of the loss ratios for this vulnerability function for each of the
intensity levels defined by the attribute imls. The number of coefficients of variation
of loss ratios defined by the covLRs attribute must be equal to the number of intensity
levels defined by the attribute imls. The uncertainty in the conditional loss ratios can
be ignored by setting all of the covLRs for a given vulnerability function to zero.
The next snippet from the vulnerability model example file of Listing 48 defines a vulnerability
function which models the uncertainty in the conditional loss ratios using a (discrete)
probability mass distribution:
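As an indicative sketch (hypothetical taxonomy id, intensity levels, and loss ratios; note that the probabilities conditional on each intensity level sum to one):

<vulnerabilityFunction id="Example_Building_Class_PM" dist="PM">
  <imls imt="MMI">6 7 8 9</imls>
  <probabilities lr="0.0">0.9 0.6 0.2 0.0</probabilities>
  <probabilities lr="0.5">0.1 0.3 0.4 0.3</probabilities>
  <probabilities lr="1.0">0.0 0.1 0.4 0.7</probabilities>
</vulnerabilityFunction>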
The following attributes are needed to define a vulnerability function which uses a discrete
probability mass distribution to model the uncertainty in the conditional loss ratios:
• id: a unique string (ASCII) used to identify the taxonomy for which the function is
being defined. This string is used to relate the vulnerability function with the relevant
asset in the exposure model. This string can contain letters (a–z; A–Z), numbers (0–9),
dashes (-), and underscores (_), with a maximum of 100 characters.
• dist: mandatory; for vulnerability functions which use a discrete probability mass
distribution to model the uncertainty in the conditional loss ratios, this attribute should
be set to “PM”.
• imls: mandatory; this attribute specifies the list of intensity levels for which the
parameters of the conditional loss ratio distributions will be defined. In addition, it is
also necessary to define the intensity measure type (imt).
• probabilities: mandatory; this field is used to define the probability of observing a particular loss ratio (specified for each row of probabilities using the attribute lr), conditional on the set of intensity levels specified using the attribute imls for this vulnerability function. Thus, the number of probabilities defined by each probabilities attribute must be equal to the number of intensity levels defined by the attribute imls. On the other hand, there is no limit to the number of loss ratios for which probabilities can be defined. In the example shown here, notice that the set of probabilities conditional on any particular intensity level, say, MMI = 8, sums to one.
Note that the schema for representing vulnerability models has changed between NRML v0.4
(used prior to oq-engine 1.7) and NRML v0.5 (introduced in oq-engine 1.7).
A deprecation warning is printed every time you attempt to use a vulnerability model in the old NRML v0.4 format in an oq-engine 1.7 (or later) risk calculation. To get rid of the warning you must upgrade the old vulnerability model files to NRML v0.5. You can use the command upgrade_nrml with oq to do this as follows:
user@ubuntu:~$ oq upgrade_nrml <directory-name>
The above command will upgrade all of your old vulnerability model files to NRML v0.5.
The original files will be kept, but with a .bak extension appended. Notice that you will need
to set the lossCategory attribute to its correct value manually. This is easy to do: if you try to run a computation, you will get a clear error message stating the expected value of the lossCategory for each file.
Several methodologies to derive vulnerability functions are currently being evaluated by GEM and have been included as part of the Risk Modeller’s Toolkit, the code for which can be found in a public repository on GitHub at: http://github.com/gemsciencetools/rmtk.
A web-based tool to build a vulnerability model in the NRML schema is also under development, and can be found at the OpenQuake platform at the following address: https://platform.openquake.org/ipt/.
Scenario Damage Calculator
Scenario Risk Calculator
Classical Probabilistic Seismic Damage Calculator
Classical Probabilistic Seismic Risk Calculator
Stochastic Event Based Seismic Risk Calculator
Retrofit Benefit-Cost Ratio Calculator
Exporting Risk Results
This Chapter summarises the structure of the information necessary to define the different
input data to be used with the OpenQuake-engine risk calculators. Input data for scenario-
based and probabilistic seismic damage and risk analysis using the OpenQuake-engine are
organised into:
• An exposure model file in the NRML format, as described in Section 7.1.
• A file describing the vulnerability model (Section 7.4) for loss calculations, or a file
describing the fragility model (Section 7.2) for damage calculations. Optionally, a
file describing the consequence model (Section 7.3) can also be provided in order to
calculate losses from the estimated damage distributions.
• A general calculation configuration file.
• Hazard inputs. These include hazard curves for the classical probabilistic damage and risk calculators, ground motion fields for the scenario damage and risk calculators, or stochastic event sets for the probabilistic event based calculators. As of OpenQuake-engine v2.1, there are several ways in which hazard calculation parameters or results can be provided to the OpenQuake-engine in order to run the subsequent risk calculations:
– Use a single configuration file for running the hazard and risk calculations se-
quentially (preferred)
– Use separate configuration files for running the hazard and risk calculations
sequentially (legacy)
– Use a configuration file for the risk calculation along with all hazard outputs from a previously completed, compatible OpenQuake-engine hazard calculation (a command-line sketch of this option follows this list)
– Use a configuration file for the risk calculation along with hazard input files in
the OpenQuake NRML format
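For instance, the third of these options corresponds to a two-step invocation of the kind illustrated later in this chapter; the job file names below are placeholders:

user@ubuntu:~$ oq engine --run job_hazard.ini
user@ubuntu:~$ oq engine --run job_risk.ini --hc <hazard_calculation_id>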
The file formats for exposure models, fragility models, consequence models, and vulnerability
models have been described earlier in Chapter 7. The configuration file is the primary file
that provides the OpenQuake-engine information regarding both the definition of the input
models (e.g. exposure, site parameters, fragility, consequence, or vulnerability models) as
well as the parameters governing the risk calculation.
Information regarding the configuration file for running hazard calculations using the
OpenQuake-engine can be found in Section 3.4. Some initial mandatory parameters of
the configuration file common to all of the risk calculators are presented in Listing 49. The
remaining parameters that are specific to each risk calculator are discussed in subsequent
sections.
1 [general]
2 description = Example risk calculation
3 calculation_mode = scenario_risk
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [vulnerability]
9 structural_vulnerability_file = structural_vulnerability_model.xml
• description: a parameter that can be used to include some information about the
type of calculations that are going to be performed.
• calculation_mode: this parameter specifies the type of calculation to be run. Valid
options for the calculation_mode for the risk calculators are: scenario_damage,
scenario_risk, classical_damage, classical_risk, event_based_risk,
and classical_bcr.
• exposure_file: this parameter is used to specify the path to the exposure model
file.
Depending on the type of risk calculation, other parameters besides the aforementioned ones may need to be provided. The following sections illustrate examples of the configuration file for the different risk calculators.
Example 1
This example illustrates a scenario damage calculation which uses a single configuration file
to first compute the ground motion fields for the given rupture model and then calculate
damage distribution statistics based on the ground motion fields. A minimal job configuration
file required for running a scenario damage calculation is shown in Listing 50.
job.ini
1 [general]
2 description = Scenario damage using a single config file
3 calculation_mode = scenario_damage
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [rupture]
9 rupture_model_file = rupture_model.xml
10 rupture_mesh_spacing = 2.0
11
12 [site_params]
13 site_model_file = site_model.xml
14
15 [hazard_calculation]
16 random_seed = 42
17 truncation_level = 3.0
18 maximum_distance = 200.0
19 gsim = BooreAtkinson2008
20 number_of_ground_motion_fields = 1000
21
22 [fragility]
23 structural_fragility_file = structural_fragility_model.xml
Listing 50 – Example combined configuration file for running a scenario damage calculation
(Download example)
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed.
Note that one or more of the following parameters can be used in the same job configuration
file to provide the corresponding fragility model files:
• structural_fragility_file: a parameter used to define the path to a structural
fragility model file
• nonstructural_fragility_file: a parameter used to define the path to a non-
structural fragility model file
• contents_fragility_file: a parameter used to define the path to a contents
fragility model file
• business_interruption_fragility_file: a parameter used to define the path
to a business interruption fragility model file
It is important that the lossCategory parameter in the metadata section of each provided fragility model file (“structural”, “nonstructural”, “contents”, or “business_interruption”) matches the loss type defined in the configuration file by the relevant keyword above.
Example 2
This example illustrates a scenario damage calculation which uses separate configuration
files for the hazard and risk parts of a scenario damage assessment. The first configuration
file shown in Listing 51 contains input models and parameters required for the computation
of the ground motion fields due to a given rupture. The second configuration file shown in
Listing 52 contains input models and parameters required for the calculation of the damage
distribution for a portfolio of assets due to the ground motion fields.
job_hazard.ini
1 [general]
2 description = Scenario hazard example
3 calculation_mode = scenario
4
5 [rupture]
6 rupture_model_file = rupture_model.xml
7 rupture_mesh_spacing = 2.0
8
9 [sites]
10 sites_csv = sites.csv
11
12 [site_params]
13 site_model_file = site_model.xml
14
15 [hazard_calculation]
16 random_seed = 42
17 truncation_level = 3.0
18 maximum_distance = 200.0
19 gsim = BooreAtkinson2008
20 intensity_measure_types = PGA, SA(0.3)
21 number_of_ground_motion_fields = 1000
22 ground_motion_correlation_model = JB2009
23 ground_motion_correlation_params = {"vs30_clustering": True}
Listing 51 – Example hazard configuration file for a scenario damage calculation (Download
example)
In this example, the set of intensity measure types for which the ground motion fields
should be generated is specified explicitly in the configuration file using the parameter
intensity_measure_types. If the hazard calculation outputs are intended to be used as
inputs for a subsequent scenario damage or risk calculation, the set of intensity measure
types specified here must include all intensity measure types that are used in the fragility or
vulnerability models for the subsequent damage or risk calculation.
In the hazard configuration file illustrated above (Listing 51), the list of sites at which the ground motion values will be computed is provided in a CSV file, specified using the sites_csv parameter. The sites used for the hazard calculation need not be the same as the locations of the assets in the exposure model used for the following risk calculation. In such cases, it is recommended to set a reasonable search radius (in km) using the asset_hazard_distance parameter for the OpenQuake-engine to look for available hazard values, as shown in the job_damage.ini example file (Listing 52).
The only new parameters introduced in the risk configuration file for this example (Listing 52) are described below:
job_damage.ini
1 [general]
2 description = Scenario damage example
3 calculation_mode = scenario_damage
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [boundaries]
9 region = -123.0 38.3, -121.0 38.3, -121.0 36.5, -123.0 36.5
10
11 [hazard]
12 asset_hazard_distance = 20
13
14 [fragility]
15 structural_fragility_file = structural_fragility_model.xml
16
17 [risk_calculation]
18 time_event = night
Listing 52 – Example risk configuration file for a scenario damage calculation (Download example)
• region: this parameter defines the polygon of interest for the risk calculation; only the assets in the exposure model located inside this region will be considered.
• asset_hazard_distance: this parameter specifies the maximum search distance (in km) used to associate assets with hazard input sites; the hazard input site with the shortest distance from the asset location is associated with the asset. It is possible that the associated hazard input site might be located outside the polygon defined by the region.
• time_event: this parameter indicates the time of day at which the event occurs. The
values that this parameter can be set to are currently limited to one of the three strings:
day, night, and transit. This parameter will be used to compute the number of
fatalities based on the number of occupants present in the various assets at that time
of day, as specified in the exposure model.
Now, the above calculations described by the two configuration files “job_hazard.ini” and
“job_damage.ini” can be run separately. The calculation id for the hazard calculation should
be provided to the OpenQuake-engine while running the risk calculation using the option
--hazard-calculation-id (or --hc). This is shown below:
user@ubuntu:~$ oq engine --run job_hazard.ini
After the hazard calculation is completed, a message similar to the one below will be displayed
in the terminal:
Calculation 2681 completed in 4 seconds. Results:
id | name
5072 | Ground Motion Fields
In the example above, the calculation id of the hazard calculation is 2681. There is only one
output from this calculation, i.e., the GMFs.
The risk calculation for computing the damage distribution statistics for the portfolio of assets can now be run using:
user@ubuntu:~$ oq engine --run job_damage.ini --hc 2681
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed, as in Example 1.
In order to retrieve the calculation id of a previously run hazard calculation, the option --list-hazard-calculations (or --lhc) can be used to display a list of all previously run hazard calculations:
user@ubuntu:~$ oq engine --lhc
job_id | status | start_time | description
2609 | successful | 2015-12-01 14:14:14 | Mid Nepal earthquake
...
2681 | successful | 2015-12-12 10:00:00 | Scenario hazard example
The option --list-outputs (or --lo) can be used to display a list of all outputs generated during a particular calculation. For instance:
user@ubuntu:~$ oq engine --lo 2681
id | name
5072 | Ground Motion Fields
Example 3
The example shown in Listing 53 illustrates a scenario damage calculation which uses a file
listing a precomputed set of GMFs. These GMFs can be computed using the OpenQuake-
engine or some other software. The GMFs must be provided in either the NRML schema or
the csv format as presented in Section 4.3.4. The damage distribution is computed based on
the provided GMFs. Listing 26 shows an example of a GMFs file in the NRML schema and
Table 4.4 shows an example of a GMFs file in the csv format. If the GMFs file is provided in
the csv format, an additional csv file listing the site ids must be provided using the parameter
sites_csv. See Table 4.5 for an example of the sites csv file, which provides the association
between the site ids in the GMFs csv file with their latitude and longitude coordinates.
job.ini
1 [general]
2 description = Scenario damage using user-defined ground motion fields (NRML)
3 calculation_mode = scenario_damage
4
5 [hazard]
6 gmfs_file = gmfs.xml
7
8 [exposure]
9 exposure_file = exposure_model.xml
10
11 [fragility]
12 structural_fragility_file = structural_fragility_model.xml
Listing 53 – Example configuration file for a scenario damage calculation using a precomputed set
of ground motion fields (Download example)
• gmfs_file: a parameter used to define the path to the GMFs file in the NRML schema.
This file must define GMFs for all of the intensity measure types used in the fragility
model.
• gmfs_csv: a parameter used to define the path to the GMFs file in the csv format.
This file must define GMFs for all of the intensity measure types used in the fragility
model. (Download an example file here).
• sites_csv: a parameter used to define the path to the sites file in the csv format.
This file must define site id, longitude, and latitude for all of the sites for the GMFs file
provided using the gmfs_csv parameter. (Download an example file here).
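As a purely hypothetical sketch of such a sites file (assuming a site_id, lon, lat column layout consistent with Table 4.5; the identifiers and coordinates below are invented):

site_id,lon,lat
0,85.30,27.70
1,85.32,27.71
2,85.34,27.72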
The above calculation(s) can be run using the command line:
user@ubuntu:~$ oq engine --run job.ini
job.ini
1 [general]
2 description = Scenario damage using user-defined ground motion fields (csv)
3 calculation_mode = scenario_damage
4
5 [hazard]
6 sites_csv = sites.csv
7 gmfs_csv = gmfs.csv
8
9 [exposure]
10 exposure_file = exposure_model.xml
11
12 [fragility]
13 structural_fragility_file = structural_fragility_model.xml
Listing 54 – Example configuration file for a scenario damage calculation using a precomputed set
of ground motion fields (Download example)
Example 4
This example illustrates the hazard job configuration file for a scenario damage calculation which uses two GMPEs instead of only one. Currently, the set of GMPEs to be used for a scenario calculation can be specified using a logic tree file, as demonstrated in Section 3.3.1. As of OpenQuake-engine v1.8, the weights in the logic tree are ignored, and a set of GMFs will be generated for each GMPE in the logic tree file. Correspondingly, damage distribution statistics will be generated for each set of GMFs.
The file shown in Listing 55 lists the two GMPEs to be used for the hazard calculation:
The only change that needs to be made in the hazard job configuration file is to replace the
gsim parameter with gsim_logic_tree_file, as demonstrated in Listing 56.
Example 5
This example illustrates a scenario damage calculation which specifies fragility models for
calculating damage to structural and nonstructural components of structures, and also
specifies consequence model files for calculation of the corresponding losses.
A minimal job configuration file required for running a scenario damage calculation followed
by a consequences analysis is shown in Listing 57.
Note that one or more of the following parameters can be used in the same job configuration
file to provide the corresponding consequence model files:
• structural_consequence_file: a parameter used to define the path to a struc-
tural consequence model file
• nonstructural_consequence_file: a parameter used to define the path to a
nonstructural consequence model file
• contents_consequence_file: a parameter used to define the path to a contents consequence model file
• business_interruption_consequence_file: a parameter used to define the path to a business interruption consequence model file
gsim_logic_tree.xml
1 <?xml version="1.0" encoding="UTF-8"?>
2 <nrml xmlns:gml="http://www.opengis.net/gml"
3 xmlns="http://openquake.org/xmlns/nrml/0.5">
4
5 <logicTree logicTreeID="lt1">
6 <logicTreeBranchingLevel branchingLevelID="bl1">
7 <logicTreeBranchSet uncertaintyType="gmpeModel"
8 branchSetID="bs1"
9 applyToTectonicRegionType="Active Shallow Crust">
10
11 <logicTreeBranch branchID="b1">
12 <uncertaintyModel>BooreAtkinson2008</uncertaintyModel>
13 <uncertaintyWeight>0.75</uncertaintyWeight>
14 </logicTreeBranch>
15
16 <logicTreeBranch branchID="b2">
17 <uncertaintyModel>ChiouYoungs2008</uncertaintyModel>
18 <uncertaintyWeight>0.25</uncertaintyWeight>
19 </logicTreeBranch>
20
21 </logicTreeBranchSet>
22 </logicTreeBranchingLevel>
23 </logicTree>
24
25 </nrml>
Listing 55 – Example ground motion logic tree for a scenario calculation (Download example)
job_hazard.ini
1 [general]
2 description = Scenario hazard example using multiple GMPEs
3 calculation_mode = scenario
4
5 [rupture]
6 rupture_model_file = rupture_model.xml
7 rupture_mesh_spacing = 2.0
8
9 [sites]
10 sites_csv = sites.csv
11
12 [site_params]
13 site_model_file = site_model.xml
14
15 [hazard_calculation]
16 random_seed = 42
17 truncation_level = 3.0
18 maximum_distance = 200.0
19 gsim_logic_tree_file = gsim_logic_tree.xml
20 intensity_measure_types = PGA, SA(0.3)
21 number_of_ground_motion_fields = 1000
22 ground_motion_correlation_model = JB2009
23 ground_motion_correlation_params = {"vs30_clustering": True}
Listing 56 – Example configuration file for a scenario damage calculation using a logic-tree file
(Download example)
job.ini
1 [general]
2 description = Scenario damage and consequences
3 calculation_mode = scenario_damage
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [rupture]
9 rupture_model_file = rupture_model.xml
10 rupture_mesh_spacing = 2.0
11
12 [site_params]
13 site_model_file = site_model.xml
14
15 [hazard_calculation]
16 random_seed = 42
17 truncation_level = 3.0
18 maximum_distance = 200.0
19 gsim = BooreAtkinson2008
20 number_of_ground_motion_fields = 1000
21 ground_motion_correlation_model = JB2009
22 ground_motion_correlation_params = {"vs30_clustering": True}
23
24 [fragility]
25 structural_fragility_file = structural_fragility_model.xml
26 nonstructural_fragility_file = nonstructural_fragility_model.xml
27
28 [consequence]
29 structural_consequence_file = structural_consequence_model.xml
30 nonstructural_consequence_file = nonstructural_consequence_model.xml
Listing 57 – Example configuration file for a scenario damage calculation followed by a consequences
analysis (Download example)
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed.
Example 1
This example illustrates a scenario risk calculation which uses a single configuration file to
first compute the ground motion fields for the given rupture model and then calculate loss
statistics for structural losses and nonstructural losses, based on the ground motion fields.
The job configuration file required for running this scenario risk calculation is shown in
Listing 58.
Whereas a scenario damage calculation requires one or more fragility and/or consequence
models, a scenario risk calculation requires the user to specify one or more vulnerability
model files. Note that one or more of the following parameters can be used in the same job
configuration file to provide the corresponding vulnerability model files:
• structural_vulnerability_file: this parameter is used to specify the path to
the structural vulnerability model file
• nonstructural_vulnerability_file: this parameter is used to specify the path to the nonstructural vulnerability model file
• contents_vulnerability_file: this parameter is used to specify the path to the
contents vulnerability model file
• business_interruption_vulnerability_file: this parameter is used to spec-
ify the path to the business interruption vulnerability model file
job.ini
1 [general]
2 description = Scenario risk using a single config file
3 calculation_mode = scenario_risk
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [rupture]
9 rupture_model_file = rupture_model.xml
10 rupture_mesh_spacing = 2.0
11
12 [site_params]
13 site_model_file = site_model.xml
14
15 [hazard_calculation]
16 random_seed = 42
17 truncation_level = 3.0
18 maximum_distance = 200.0
19 gsim = BooreAtkinson2008
20 number_of_ground_motion_fields = 1000
21 ground_motion_correlation_model = JB2009
22 ground_motion_correlation_params = {"vs30_clustering": True}
23
24 [vulnerability]
25 structural_vulnerability_file = structural_vulnerability_model.xml
26 nonstructural_vulnerability_file = nonstructural_vulnerability_model.xml
27
28 [risk_calculation]
29 master_seed = 24
30 asset_correlation = 0.7
31
32 [risk_outputs]
33 all_losses = false
Listing 58 – Example combined configuration file for a scenario risk calculation (Download example)
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed.
All of the different ways of running a scenario damage calculation as illustrated through the
examples of the previous section are also applicable to the scenario risk calculator, though
the examples are not repeated here.
Example 1
This example illustrates a classical probabilistic damage calculation which uses a single
configuration file to first compute the hazard curves for the given source model and ground
motion model and then calculate damage distribution statistics based on the hazard curves. A
minimal job configuration file required for running a classical probabilistic damage calculation
is shown in Listing 59.
The general parameters description, calculation_mode, and exposure_file have already been described earlier in Section 8.1. The parameters related to the hazard curves computation have been described earlier in Section 3.4.1.
In this case, the hazard curves will be computed at each of the locations of the assets in the
exposure model, for each of the intensity measure types found in the provided set of fragility
models. The above calculation can be run using the command line:
user@ubuntu:~$ oq engine --run job.ini
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed.
Example 2
This example illustrates a classical probabilistic damage calculation which uses separate
configuration files for the hazard and risk parts of a classical probabilistic damage assessment.
The first configuration file shown in Listing 60 contains input models and parameters required
for the computation of the hazard curves. The second configuration file shown in Listing 61
contains input models and parameters required for the calculation of the probabilistic damage
distribution for a portfolio of assets based on the hazard curves and fragility models.
Now, the above calculations described by the two configuration files “job_hazard.ini” and “job_damage.ini” can be run sequentially or separately, as illustrated in Example 2 in Section 8.1.
job.ini
1 [general]
2 description = Classical probabilistic damage using a single config file
3 calculation_mode = classical_damage
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [erf]
9 width_of_mfd_bin = 0.1
10 rupture_mesh_spacing = 2
11 area_source_discretization = 20
12
13 [site_params]
14 site_model_file = site_model.xml
15
16 [logic_trees]
17 source_model_logic_tree_file = source_model_logic_tree.xml
18 gsim_logic_tree_file = gsim_logic_tree.xml
19 number_of_logic_tree_samples = 0
20
21 [hazard_calculation]
22 random_seed = 42
23 investigation_time = 1
24 truncation_level = 3.0
25 maximum_distance = 200.0
26
27 [fragility]
28 structural_fragility_file = structural_fragility_model.xml
Listing 59 – Example combined configuration file for a classical probabilistic damage calculation
(Download example)
job_hazard.ini
1 [general]
2 description = Classical probabilistic hazard
3 calculation_mode = classical
4
5 [sites]
6 region = -123.0 38.3, -121.0 38.3, -121.0 36.5, -123.0 36.5
7 region_grid_spacing = 0.5
8
9 [erf]
10 width_of_mfd_bin = 0.1
11 rupture_mesh_spacing = 2
12 area_source_discretization = 20
13
14 [site_params]
15 site_model_file = site_model.xml
16
17 [logic_trees]
18 source_model_logic_tree_file = source_model_logic_tree.xml
19 gsim_logic_tree_file = gsim_logic_tree.xml
20 number_of_logic_tree_samples = 0
21
22 [hazard_calculation]
23 random_seed = 42
24 investigation_time = 1
25 truncation_level = 3.0
26 maximum_distance = 200.0
Listing 60 – Example hazard configuration file for a classical probabilistic damage calculation
(Download example)
job_damage.ini
1 [general]
2 description = Classical probabilistic damage example
3 calculation_mode = classical_damage
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [hazard]
9 asset_hazard_distance = 20
10
11 [fragility]
12 structural_fragility_file = structural_fragility_model.xml
13
14 [risk_calculation]
15 risk_investigation_time = 50
16 steps_per_interval = 4
Listing 61 – Example risk configuration file for a classical probabilistic damage calculation (Down-
load example)
The new parameters introduced in the above example configuration file are described below:
• risk_investigation_time: an optional parameter that can be used in probabilistic
damage or risk calculations where the period of interest for the risk calculation is
different from the period of interest for the hazard calculation. If this parameter is not
explicitly set, the OpenQuake-engine will assume that the risk calculation is over the
same time period as the preceding hazard calculation.
• steps_per_interval: an optional parameter that can be used to specify whether discrete fragility functions in the fragility models should be discretized further, and if so, how many intermediate steps to use for the discretization. Setting
steps_per_interval = n
will result in the OpenQuake-engine discretizing the discrete fragility models using (n − 1) linear interpolation steps between each pair of (intensity level, PoE) points. For instance, steps_per_interval = 4, as in Listing 61, inserts three interpolated points between each pair of consecutive points of the original fragility function. The default value of this parameter is one, implying no interpolation.
Example 1
This example illustrates a classical probabilistic risk calculation which uses a single configu-
ration file to first compute the hazard curves for the given source model and ground motion
model and then calculate loss exceedance curves based on the hazard curves. An example job
configuration file for running a classical probabilistic risk calculation is shown in Listing 62.
job.ini
1 [general]
2 description = Classical probabilistic risk using a single config file
3 calculation_mode = classical_risk
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [erf]
9 width_of_mfd_bin = 0.1
10 rupture_mesh_spacing = 2
11 area_source_discretization = 20
12
13 [site_params]
14 site_model_file = site_model.xml
15
16 [logic_trees]
17 source_model_logic_tree_file = source_model_logic_tree.xml
18 gsim_logic_tree_file = gsim_logic_tree.xml
19 number_of_logic_tree_samples = 0
20
21 [hazard_calculation]
22 random_seed = 42
23 investigation_time = 1
24 truncation_level = 3.0
25 maximum_distance = 200.0
26
27 [vulnerability]
28 structural_vulnerability_file = structural_vulnerability_model.xml
29 nonstructural_vulnerability_file = nonstructural_vulnerability_model.xml
Listing 62 – Example combined configuration file for a classical probabilistic risk calculation
(Download example)
Apart from the calculation mode, the only difference with the example job configuration file
shown in Example 1 of Section 8.3 is the use of a vulnerability model instead of a fragility
model.
As with the Scenario Risk calculator, it is possible to specify one or more vulnerability model
files in the same job configuration file, using the parameters:
• structural_vulnerability_file,
• nonstructural_vulnerability_file,
• contents_vulnerability_file,
• business_interruption_vulnerability_file, and/or
• occupants_vulnerability_file
It is important that the lossCategory parameter in the metadata section of each provided vulnerability model file (“structural”, “nonstructural”, “contents”, “business_interruption”, or “occupants”) matches the loss type defined in the configuration file by the relevant keyword above.
In this case, the hazard curves will be computed at each of the locations of the assets in
the exposure model, for each of the intensity measure types found in the provided set of
vulnerability models. The above calculation can be run using the command line:
user@ubuntu:~$ oq engine --run job.ini
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed.
Example 2
This example illustrates a classical probabilistic risk calculation which uses separate config-
uration files for the hazard and risk parts of a classical probabilistic risk assessment. The
first configuration file shown in Listing 63 contains input models and parameters required
for the computation of the hazard curves. The second configuration file shown in Listing 64
contains input models and parameters required for the calculation of the loss exceedance
curves and probabilistic loss maps for a portfolio of assets based on the hazard curves and
vulnerability models.
Now, the above calculations described by the two configuration files “job_hazard.ini” and
“job_risk.ini” can be run sequentially or separately, as illustrated in Example 2 in Section 8.1.
The new parameters introduced in the above risk configuration file example (Listing 64) are
described below:
• lrem_steps_per_interval: this parameter controls the number of intermediate
values between consecutive loss ratios (as defined in the vulnerability model) that are
considered in the risk calculations. A larger number of loss ratios than those defined
in each vulnerability function should be considered, in order to better account for
the uncertainty in the loss ratio distribution. If this parameter is not defined in the
configuration file, the OpenQuake-engine assumes the lrem_steps_per_interval
to be equal to 5. More details are provided in the OpenQuake Book (Risk).
job_hazard.ini
1 [general]
2 description = Classical probabilistic hazard
3 calculation_mode = classical
4
5 [sites]
6 region = -123.0 38.3, -121.0 38.3, -121.0 36.5, -123.0 36.5
7 region_grid_spacing = 0.5
8
9 [erf]
10 width_of_mfd_bin = 0.1
11 rupture_mesh_spacing = 2
12 area_source_discretization = 20
13
14 [site_params]
15 site_model_file = site_model.xml
16
17 [logic_trees]
18 source_model_logic_tree_file = source_model_logic_tree.xml
19 gsim_logic_tree_file = gsim_logic_tree.xml
20 number_of_logic_tree_samples = 0
21
22 [hazard_calculation]
23 random_seed = 42
24 investigation_time = 1
25 truncation_level = 3.0
26 maximum_distance = 200.0
Listing 63 – Example hazard configuration file for a classical probabilistic risk calculation (Download
example)
job_risk.ini
1 [general]
2 description = Classical probabilistic risk
3 calculation_mode = classical_risk
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [hazard]
9 asset_hazard_distance = 20
10
11 [vulnerability]
12 structural_vulnerability_file = structural_vulnerability_model.xml
13 nonstructural_vulnerability_file = nonstructural_vulnerability_model.xml
14
15 [risk_calculation]
16 risk_investigation_time = 50
17 lrem_steps_per_interval = 2
18
19 [risk_outputs]
20 quantile_hazard_curves = 0.15, 0.50, 0.85
21 conditional_loss_poes = 0.02, 0.10
Listing 64 – Example risk configuration file for a classical probabilistic risk calculation (Download
example)
• quantile_loss_curves: this parameter can be used to request the computation of quantile loss curves for computations involving non-trivial logic trees. The quantiles
for which the loss curves should be computed must be provided as a comma separated
list. If this parameter is not included in the configuration file, quantile loss curves will
not be computed.
• conditional_loss_poes: this parameter can be used to request the computation of
probabilistic loss maps, which give the loss levels exceeded at the specified probabilities
of exceedance over the time period specified by risk_investigation_time. The
probabilities of exceedance for which the loss maps should be computed must be pro-
vided as a comma separated list. If this parameter is not included in the configuration
file, probabilistic loss maps will not be computed.
Example 1
This example illustrates a stochastic event based risk calculation which uses a single configu-
ration file to first compute the SESs and GMFs for the given source model and ground motion
model, and then calculate event loss tables, loss exceedance curves and probabilistic loss
maps for structural losses, nonstructural losses and occupants, based on the GMFs. The job
configuration file required for running this stochastic event based risk calculation is shown
in Listing 65.
Similar to the procedure described for the Scenario Risk calculator, a Monte Carlo sampling process is also employed in this calculator to take into account the uncertainty in the conditional loss ratio at a particular intensity level. Hence, the parameters asset_correlation and master_seed may be defined as previously described for the Scenario Risk calculator in Section 8.2. The parameter “risk_investigation_time” specifies the time period for which the event loss tables and loss exceedance curves will be calculated, similar to the Classical Probabilistic Risk calculator. If this parameter is not provided in the risk job configuration file, the time period used is the same as that specified in the hazard calculation using the parameter “investigation_time”.
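As a back-of-the-envelope sketch of the bookkeeping involved (a standard Poissonian approximation, not necessarily the exact estimator implemented in the engine): if the event loss table spans an effective investigation time T_eff = ses_per_logic_tree_path × investigation_time and contains n(l) events with loss exceeding l, the annual rate of exceedance and the probability of exceedance within the risk_investigation_time t can be estimated as

$\lambda(l) \approx \dfrac{n(l)}{T_{\mathrm{eff}}}, \qquad \mathrm{PoE}(l, t) = 1 - e^{-\lambda(l)\,t}$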
The new parameters introduced in this example are described below:
• minimum_intensity: this optional parameter specifies the minimum intensity levels
for each of the intensity measure types in the risk model. Ground motion fields where
each ground motion value is less than the specified minimum threshold are discarded.
This helps speed up calculations and reduce memory consumption by considering only
job.ini
1 [general]
2 description = Stochastic event based risk using a single job file
3 calculation_mode = event_based_risk
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [site_params]
9 site_model_file = site_model.xml
10
11 [erf]
12 width_of_mfd_bin = 0.1
13 rupture_mesh_spacing = 2.0
14 area_source_discretization = 10.0
15
16 [logic_trees]
17 source_model_logic_tree_file = source_model_logic_tree.xml
18 gsim_logic_tree_file = gsim_logic_tree.xml
19
20 [correlation]
21 ground_motion_correlation_model = JB2009
22 ground_motion_correlation_params = {"vs30_clustering": True}
23
24 [hazard_calculation]
25 random_seed = 24
26 truncation_level = 3
27 maximum_distance = 200.0
28 investigation_time = 1
29 number_of_logic_tree_samples = 0
30 ses_per_logic_tree_path = 100000
31 minimum_intensity = {"PGA": 0.05, "SA(0.4)": 0.10, "SA(0.8)": 0.12}
32
33 [vulnerability]
34 structural_vulnerability_file = structural_vulnerability_model.xml
35 nonstructural_vulnerability_file = nonstructural_vulnerability_model.xml
36
37 [risk_calculation]
38 master_seed = 42
39 risk_investigation_time = 1
40 asset_correlation = 0.0
41 return_periods = [5, 10, 25, 50, 100, 250, 500, 1000]
42
43 [risk_outputs]
44 avg_losses = true
45 quantile_hazard_curves = 0.15, 0.50, 0.85
46 conditional_loss_poes = 0.02, 0.10
Listing 65 – Example combined configuration file for running a stochastic event based risk calculation
(Download example)
those ground motion fields that are likely to contribute to losses. It is also possible
to set the same threshold value for all intensity measure types by simply providing
a single value to this parameter. For instance: “minimum_intensity = 0.05” would
set the threshold to 0.05 g for all intensity measure types in the risk calculation. If
this parameter is not set, the OpenQuake-engine extracts the minimum thresholds for
each intensity measure type from the vulnerability models provided, picking the first
intensity value for which the mean loss ratio is nonzero.
• return_periods: this parameter specifies the list of return periods (in years) for computing the aggregate loss curve. If this parameter is not set, the OpenQuake-engine uses a default set of return periods for computing the loss curves. The default return periods used are from the list: [5, 10, 25, 50, 100, 250, 500, 1000, ...], with its upper bound limited by (ses_per_logic_tree_path × investigation_time). For instance, with the settings of Listing 65 (100,000 SESs of 1 year each), return periods of up to 100,000 years can be estimated.
• avg_losses: this boolean parameter specifies whether the average asset losses over
the time period “risk_investigation_time” should be computed. The default value of
this parameter is true.
Computation of the loss tables, loss curves, and average losses for each individual asset in the exposure model can be resource intensive, and thus these outputs are not generated by default, unless requested using the parameters described above.
Starting from oq-engine 2.8, users may also begin a stochastic event based risk calculation
by providing a precomputed set of GMFs to the oq-engine. The following example describes
the procedure for this approach.
Example 2
This example illustrates a stochastic event based risk calculation which uses a file listing a
precomputed set of GMFs. These GMFs can be computed using the OpenQuake-engine or
some other software. The GMFs must be provided in either the NRML schema or the csv
format as presented in Section 4.3.3. Listing 25 shows an example of a GMFs file in the
NRML schema and Table 4.1 shows an example of a GMFs file in the csv format. If the GMFs
file is provided in the csv format, an additional csv file listing the site ids must be provided
using the parameter sites_csv. See Table 4.5 for an example of the sites csv file, which
provides the association between the site ids in the GMFs csv file with their latitude and
longitude coordinates.
Starting from the input GMFs, the oq-engine can calculate event loss tables, loss exceedance
curves and probabilistic loss maps for structural losses, nonstructural losses and occupants.
The job configuration file required for running this stochastic event based risk calculation
starting from a precomputed set of GMFs is shown in Listing 66.
job.ini
1 [general]
2 description = Stochastic event based risk using precomputed gmfs
3 calculation_mode = event_based_risk
4
5 [hazard]
6 sites_csv = sites.csv
7 gmfs_csv = gmfs.csv
8 investigation_time = 50
9
10 [exposure]
11 exposure_file = exposure_model.xml
12
13 [vulnerability]
14 structural_vulnerability_file = structural_vulnerability_model.xml
15
16 [risk_calculation]
17 risk_investigation_time = 1
18 return_periods = [5, 10, 25, 50, 100, 250, 500, 1000]
19
20 [risk_outputs]
21 avg_losses = true
22 quantile_hazard_curves = 0.15, 0.50, 0.85
23 conditional_loss_poes = 0.02, 0.10
Listing 66 – Example combined configuration file for running a stochastic event based risk calculation
starting from a precomputed set of ground motion fields (Download example)
Example 1
This example illustrates a classical probabilistic retrofit benefit-cost ratio calculation which
uses a single configuration file to first compute the hazard curves for the given source model
and ground motion model, then calculate loss exceedance curves based on the hazard curves
using both the original vulnerability model and the vulnerability model for the retrofitted
structures, then calculate the reduction in average annual losses due to the retrofits, and
finally calculate the benefit-cost ratio for each asset. A minimal job configuration file required
for running a classical probabilistic retrofit benefit-cost ratio calculation is shown in Listing 67.
job.ini
1 [general]
2 description = Classical cost-benefit analysis using a single config file
3 calculation_mode = classical_bcr
4
5 [exposure]
6 exposure_file = exposure_model.xml
7
8 [erf]
9 width_of_mfd_bin = 0.1
10 rupture_mesh_spacing = 2
11 area_source_discretization = 20
12
13 [site_params]
14 site_model_file = site_model.xml
15
16 [logic_trees]
17 source_model_logic_tree_file = source_model_logic_tree.xml
18 gsim_logic_tree_file = gsim_logic_tree.xml
19 number_of_logic_tree_samples = 0
20
21 [hazard_calculation]
22 random_seed = 42
23 investigation_time = 1
24 truncation_level = 3.0
25 maximum_distance = 200.0
26
27 [vulnerability]
28 structural_vulnerability_file = structural_vulnerability_model.xml
29 structural_vulnerability_retrofitted_file = retrofit_vulnerability_model.xml
30
31 [risk_calculation]
32 interest_rate = 0.05
33 asset_life_expectancy = 50
34 lrem_steps_per_interval = 1
Listing 67 – Example configuration file for a classical probabilistic retrofit benefit-cost ratio calcu-
lation (Download example)
The new parameters introduced in the above example configuration file are described below:
• vulnerability_retrofitted_file: this parameter is used to specify the path to the vulnerability model file containing the vulnerability functions for the retrofitted asset
• interest_rate: this parameter is used in the calculation of the present value of the reduction in average annual losses achieved by the retrofit
• asset_life_expectancy: this parameter specifies the period (in years) over which the benefits of the retrofit are accrued, i.e., the design life or remaining life of the assets
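Although the exact expression is not reproduced in this section, the benefit-cost ratio is conventionally computed with a formulation of the following form (a sketch under standard assumptions; AAL_o and AAL_r denote the average annual losses for the original and retrofitted assets, r the interest_rate, t the asset_life_expectancy, and C_R the retrofitting cost given in the exposure model):

$\mathrm{BCR} = \dfrac{(\mathrm{AAL}_{o} - \mathrm{AAL}_{r})\,\left(1 - e^{-rt}\right)/r}{C_{R}}$

The numerator is the present value of the annual loss savings discounted continuously at rate r over t years; a ratio greater than one suggests that the retrofit pays for itself over the remaining life of the assets.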
After the calculation is completed, a message summarising the calculation id and the generated outputs will be displayed.
To obtain a list of all risk calculations that have been previously run (successfully or unsuccessfully), or are currently running, the following command can be employed:
user@ubuntu:~$ oq engine --list-risk-calculations
or simply:
user@ubuntu:~$ oq engine --lrc
Then, in order to display a list of the risk outputs from a given job, the following command
can be used:
user@ubuntu:~$ oq engine --list-outputs <risk_calculation_id>
or simply:
user@ubuntu:~$ oq engine --lo <risk_calculation_id>
which will display a list of outputs for the calculation requested, as presented below:
Calculation 4 results:
id | name
29 | loss_curve
30 | loss_map
Then, in order to export all of the risk calculation outputs in the appropriate xml format, the following command can be used:
user@ubuntu:~$ oq engine --export-outputs <risk_calculation_id> <output_directory>
or simply:
user@ubuntu:~$ oq engine --eos <risk_calculation_id> <output_directory>
If, instead of exporting all of the outputs from a particular calculation, only particular output files need to be exported, this can be achieved by using the --export-output option and providing the id of the required output:
user@ubuntu:~$ oq engine --export-output <output_id> <output_directory>
or simply:
user@ubuntu:~$ oq engine --eo <output_id> <output_directory>
9. Risk Results
The following sections describe the different output files produced by the risk calculators.
The output file lists the mean and standard deviation of the number of buildings in each damage state for each asset in the exposure model for all loss types (amongst “structural”, “nonstructural”, “contents”, or “business_interruption”) for which a consequence model file was also provided in the configuration file in addition to the corresponding fragility model file.
If the OpenQuake-QGIS IRMT plugin is used for visualizing or exporting the results, the Scenario Damage calculator can also estimate the expected total number of buildings with a certain combination of tags in each damage state, and make this available for export as a csv file. This distribution of damage per building tag is depicted in the example output file snippet in Table 9.2.
Table 9.2 – Example of a scenario damage distribution per tag output file
The output file lists the mean of the total number of buildings in each damage state for each
tag found in the exposure model for all loss types (amongst “structural”, “nonstructural”,
“contents”, or “business_interruption”).
Finally, a total damage distribution output file can also be generated if the OpenQuake-
QGIS IRMT plugin is used for visualizing or exporting the results from a Scenario Damage
Calculation, which will contain the mean and standard deviation of the total number of
buildings in each damage state, as illustrated in the example file in Table 9.2.
The output file lists consequence statistics for all loss types (amongst “structural”, “nonstruc-
tural”, “contents”, or “business_interruption”) for which a consequence model file was also
provided in the configuration file in addition to the corresponding fragility model file.
Table 9.7 – Example of a scenario loss distribution per tag output file
The output file lists the scenario loss statistics per tag for all loss types (amongst “structural”, “nonstructural”, “contents”, or “business_interruption”) for which a vulnerability model file was provided in the configuration file.
If the OpenQuake-QGIS IRMT plugin is used for visualizing or exporting the results from
a Scenario Risk Calculation, the mean total loss and associated standard deviation for the
selected earthquake rupture will be computed and made available for export as a csv file, as
illustrated in the example shown in Table 9.8.
The losses by event output lists the total losses for each realization of the scenario generated in the Monte Carlo simulation process for all loss types for which a vulnerability model file was provided in the configuration file. These results are exported in a comma separated value (.csv) file as illustrated in the example shown in Table 9.9.
This file lists the expected number of structural units in each damage state for each asset, for
the time period specified by the parameter risk_investigation_time.
Depending on the type of calculator used and the options defined before running a probabilistic risk calculation, one
or more of the sets of loss exceedance curves described in the following subsections will be
generated for all loss types (amongst “structural”, “nonstructural”, “contents”, “occupants”, or
“business_interruption”) for which a vulnerability model file was provided in the configuration
file.
Quantile loss curves for individual assets are generated by the Classical Probabilistic Risk
Calculator, and optionally by the Stochastic Event-Based Probabilistic Risk Calculator (if the
parameter "loss_ratios" is defined in the configuration file). The quantiles for which loss
curves will be calculated should have been defined in the job configuration file for the
calculation using the parameter quantile_loss_curves.
The structure of the file is identical to that of the individual asset loss exceedance curve
output file.
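For instance, loss curves for the 15th, 50th, and 85th percentiles could be requested with a
configuration line such as the following (the values are purely illustrative):
quantile_loss_curves = 0.15, 0.50, 0.85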
Aggregate loss exceedance curves are generated only by the Stochastic Event-Based Proba-
bilistic Risk Calculator and describe the probabilities of exceedance of the total loss across
the entire portfolio for a set of loss values within a given time span (or investigation interval).
These results are exported in a comma-separated values (.csv) file as illustrated in the example
shown in Table 9.12.
As described previously for individual assets, mean aggregate loss exceedance curves and
quantile aggregate loss exceedance curves will also be generated when relevant.
A probabilistic loss map contains the losses that have a specified probability of exceedance
within a given time span (or investigation interval) throughout the region of interest. This
result can be generated using either the Stochastic Event-Based Probabilistic Risk Calculator
or the Classical Probabilistic Risk Calculator.
The file snippet included in Table 9.13 shows an example probabilistic loss map output file.
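Loss maps of this kind are requested through the conditional_loss_poes parameter in the job
configuration file; for example, maps for probabilities of exceedance of 1% and 10% could be
requested with (values illustrative):
conditional_loss_poes = 0.01, 0.10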
In this output file, the benefit-cost ratio (bcr), the average annual loss for the original
(aalOrig) configuration, and the average annual loss for the retrofitted (aalRetr) configuration
of the assets are provided.
The following sections describe the set of demos that have been compiled to demonstrate some
of the features and usage of the risk calculators of the OpenQuake-engine. These demos can
be found in a public repository on GitHub at the following link:
https://github.com/gem/oq-engine/tree/master/demos/risk. Furthermore, a folder containing all of these demonstrative
examples is provided when an OATS (OpenQuake Alpha Testing Service) account is requested,
and it is also part of the oq-engine virtual image package.
These examples are purely demonstrative and are not intended to represent accurately the
seismicity, vulnerability or exposure characteristics of the region of interest, but simply to
provide example input files that can be used as a starting point for users planning to employ
the OpenQuake-engine in seismic risk and loss estimation studies.
Note that the demonstrative examples presented in this section include illustrations of the
various messages displayed by the engine in the command line interface. These messages
often contain information about the calculation id and output id, which will differ for each
user.
Following is the list of demos which illustrate how to use the oq-engine for various
scenario-based and probabilistic seismic risk analyses:
• ClassicalBCR
• ClassicalDamage
• ClassicalRisk
• EventBasedRisk
• ScenarioDamage
• ScenarioRisk
These six demos use Nepal as the region of interest. An example exposure model has been
developed for this region, comprising 9,144 assets distributed amongst 2,221 locations (due
to the existence of more than one asset at the same location). A map with the distribution of
assets throughout Nepal is shown below.
(Figure: map of Nepal showing the distribution of assets in the exposure model, covering
approximately 80°E–89°E and 26°N–31°N.)
The building portfolio was organised into four classes for the rural areas (adobe, dressed
stone, unreinforced fired brick, wooden frames), and five classes for the urban areas (the
aforementioned typologies, in addition to reinforced concrete buildings). For each one
of these building typologies, vulnerability functions and fragility functions were collected
from the published literature available for the region. These input models are only for
demonstrative purposes and for further information about the building characteristics of
Nepal, users are advised to contact the National Society for Earthquake Technology of Nepal
(NSET - http://www.nset.org.np/).
The following sections include instructions not only on how to run the risk calculations,
but also on how to produce the necessary hazard inputs. Thus, each demo comprises the
configuration file, exposure model and fragility or vulnerability models fundamental for the
risk calculations. Each demo folder also includes a configuration file and the input models
needed to produce the relevant hazard inputs.
A rupture of magnitude Mw 7 in the central part of Nepal is considered in this demo. The
characteristics of this rupture (geometry, dip, rake, hypocentre, upper and lower seismogenic
depth) are defined in the fault_rupture.xml file, and the hazard and risk calculation
settings are specified in the job.ini file.
To run the Scenario Damage demo, users should navigate to the folder where the required
files have been placed and employ the following command:
user@ubuntu:~$ oq engine --run job.ini
The risk job calculates the probabilistic damage distribution for each asset in the exposure
model starting from the above generated hazard curves. The following command launches
the risk calculations:
user@ubuntu:~$ oq engine --run job_risk.ini --hc 8971
The same hazard input as described in the Classical Probabilistic Damage demo is used for
this demo. Thus, the workflow to produce the set of hazard curves described in Section 10.3
is also valid herein. Then, to run the Classical Probabilistic Risk demo, users should navigate
to the folder containing the demo input models and configuration files and employ the
following command:
user@ubuntu:~$ oq engine --run job_hazard.ini
In this demo, loss exceedance curves for each asset and two probabilistic loss maps (for
probabilities of exceedance of 1% and 10%) are produced. The following command launches
these risk calculations:
user@ubuntu:~$ oq engine --run job_risk.ini --hc 8971
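The hazard calculation id (8971 in these examples) will differ for each user; the ids of
previously completed hazard calculations can be retrieved with:
user@ubuntu:~$ oq engine --list-hazard-calculations
or simply:
user@ubuntu:~$ oq engine --lhc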