Decision-Making Under Risk: A Normative and Behavioral Perspective
This chapter introduces the theories of decision-making under uncertainty and risk in socio-technical systems. Following the historic development of the main conceptions of rationality, we start with expected utility theories and explain the rational choice (or normative) perspective. We explain how decisions under risk can be optimized consistently within the framework of the theory, under which conditions such analyses are particularly applicable, and when they reduce to an economic cost-benefit analysis. We then discuss why the classic theories are sometimes misused and why the normative perspective is not suitable to describe or predict actual human behavior, perception or evaluation of decisions and their outcomes under uncertainty and risk. We then outline alternative theories of decision-making, including descriptive approaches from behavioral economics (e.g. cognitive biases) as well as ecological rationality and heuristic decision making. As is discussed in this chapter, the normative approach is suited for optimizing decisions in a consistent manner for relatively well-defined (often technical) problems, whereas the alternative theories are more suitable for predicting actual human and social evaluations and behavior, and can provide improved decision making in complex situations where the socio-technical system parameters as well as the decision maker's preferences are not well defined.
D. Straub
Engineering Risk Analysis Group, Faculty of Civil, Geo and Environmental Engineering,
Technische Universität München, Theresienstr. 90, 80333 Munich, Germany
I. Welpe
Chair for Strategy and Organization, TUM School of Management, Technische Universität
München, Arcisstr. 21, 80798 Munich, Germany
e-mail: welpe@tum.de
The Facts
• In theories of judgment and decision making one has to distinguish between how
people should make decisions (idealistic, normative approaches) and how people
actually make decisions (realistic, descriptive approaches).
• Normative decision theory assumes that under certain circumstances decision
makers (should) follow a certain set of rules that ensures consistency among deci-
sions as well as optimal decision outcomes. Descriptive decision theory accounts
for the fact that people do not follow these rules, and for situations in which an optimal set of rules cannot be given.
• Normative decision theory is applicable to well defined and contained (often tech-
nical) problems, and can be used to optimize risk levels. A number of tools, in-
cluding decision trees and graphs, exist. It can also be used to optimize the amount
of information that should be collected to reduce uncertainty before making the
decision.
• The utility function describes the decision maker's preferences. It is an empirical
function that can differ between individuals and is influenced by subjective per-
ceptions. No mathematical form of the utility function is justified by some “uni-
versal law”.
• Contrary to what the classical normative theory proposes, the subjective, observer-dependent perception of "objective" values and probabilities has a strong impact on human perceptions, evaluations and decisions. The normative theory therefore generally fails to accurately describe or predict actual decision making under risk and uncertainty.
• When optimization is not possible, people often make good decisions through the
use of heuristics and “gut feelings”.
• Most risks are embedded in socio-technical systems; it is therefore advisable to be familiar with and to use both normative and descriptive risk decision theories.
• There is no “fixed formula” for ideal decision making under risk and uncertainty.
1 Introduction
Decision making under conditions of uncertainty and risk is an every-day task.
When deciding whether or not to take the umbrella upon leaving the house, when
deciding on whether or not to wear a helmet for bicycling or when deciding whether
to take the train or the airplane, you are making a decision that involves outcomes
that are uncertain (Will it rain? Will you be hit by a car? Will the train or the plane
be safer?) and that are associated with risks (of catching a cold; of sustaining in-
juries). In our every-day life, we often use intuition (also called heuristics or gut
feeling—see Sect. 3) to make such decisions, which often works well. As profes-
sionals dealing with risk and uncertainty we often have to make complex and far-
reaching decisions or advise the ones that make those decisions, e.g. a committee
of experts in health risk that must make a recommendation on acceptable levels of
air pollution, a team of engineers that must determine the optimal flood protection
strategy for a city, or a team of corporate managers that must weigh the economic risks against the technical risks in the introduction of new products and technologies. Even as individuals we frequently must decide between alternatives involving uncertainties with which we have little experience and intuition, for example as a patient choosing between different treatment options, as a saver choosing between different investment strategies for retirement, or in private life when deciding for or against a life partner. Decision theory has been developed to describe and model the process of making such decisions and ideally supports us in identifying the best options.
Decision theory started out by assuming that the outcomes of decisions can
be assessed following a set of consistent decision rules (often—and somewhat
misleadingly—referred to as “rational decision making”). Based on these rules, it
is then possible to mathematically identify optimal decisions under conditions of
uncertainty. Today, this theory is called the normative decision theory, because it is
useful in describing how decisions should ideally be made under some idealistic,
objective and observer-independent assumptions (compare Sect. 3.2), which will be
discussed in this chapter. When studying the behavior of decision makers, it is observed that people's assumptions and resulting actions are not consistent with the assumptions and rules of the normative decision theory. Instead, decisions made by people are influenced by a number of cognitive, motivational, affective and other factors that are not addressed by the classical normative theory. Decisions
associated with risk and uncertainty are often concerned with socio-technical sys-
tems of some sort, in which human, social and technical dimensions continuously
interact. In order to understand, model and reduce risk in these anthropogenic sys-
tems, it is necessary to understand how people involved in the process actually per-
ceive, evaluate and decide about risk, which is the aim of descriptive decision theory
that concerns itself with the empirical reality of how people think and decide.
Examples for the application of the normative theory in risk management include
the optimization of decisions on the optimal level of flood protection for a city based
on probabilistic models of future flood events and infrastructure performance, or
decisions on optimal levels of insurance and reinsurance coverage. Examples for
the application of the descriptive theory arise when dealing with processes whose
outcomes substantially depend on the perceptions, evaluations, decisions and inter-
ventions of humans. For example, consumers decide if genetically modified food is
safe for them to buy and eat, or if nuclear energy is an acceptable form of energy
technology.
As described in the above paragraphs, in this chapter we distinguish between the
normative and the descriptive decision theory. Normative decision analysis uses a
mathematical modeling approach based on the expected utility theory (sometimes
also called normative, prescriptive, rational or economic decision analysis) and
provides a framework for analyzing the optimality of decisions when knowledge
of the probability and consequences involved in the decision is available or can
be approximated. Descriptive or behavioral decision analysis supports risk-related
decisions in complex, socio-technical systems that involve uncertainties with regard
to probability and outcomes that make exact quantification difficult. Using either
normative or descriptive decision theory in isolation gives an incomplete assessment
of the realities of the risk situation. Risk management in socio-technical systems and
situations should always consider both normative and descriptive aspects of decision
analysis. Risk managers and decision makers thus need to be familiar with different
risk theories and perceptions.
Section 2 of this chapter presents an introduction to the normative theory while
Sect. 3 introduces the descriptive theory. Finally, Sect. 4 concludes with a compari-
son of the main theories with regard to their assumptions, approach, decision criteria
and applicability.
1 We note that at least two reasons for this preference can be distinguished: (1) Rules and numbers
allow for an “objective” and “true” assessment of risks, probabilities and outcomes. (2) In social
interactions, the legitimacy and acceptability of decisions is increased by justifying them through
the use of (sometimes just seemingly) objective and true assessment of risks, probabilities and
outcomes.
2 Normative Decision Analysis
Normative decision analysis requires a model of the relevant system and time frame,
the identification of possible decision alternatives and the probabilities and out-
comes as well as a measure for evaluating the optimality of the decision alternatives.
For engineering problems, the relevant system is typically represented by physical,
chemical and/or logical models with input and output variables, some of which are
uncertain. Following the literature on decision analysis, we will represent the system by a vector of random variables Θ. Often, Θ is referred to as the "state of nature". As an example, consider the problem of determining the optimal flood protection for a city. Here, Θ might represent the future maximum water height and discharge of the river, as well as the future land use in the areas at risk.
The decision alternatives can be separated into decisions on actions and decisions
on gathering further information. The former, which we will denote by a, actively
change the state of the system as represented by Θ. As an example, the decision on
building a dam upstream will change the probability of a flooding of the city or the
decision on allowing no building close to the river will alter the damage in the case
of a flood. On the other hand, decisions on gathering further information, denoted
by e, will not change the state of the system. Upon obtaining the information, our
estimate of the system state may change, however. If, for example, one decides to
perform an extended hydrological study, one will reduce the uncertainty on the es-
timate of the intensity of future flood events and obtain a more accurate estimate of
maximum floods. In the following we will focus on decisions on actions a; deci-
sions on collecting information e are considered in pre-posterior decision analysis
as introduced in Sect. 2.6.
Finally, we must identify the attributes of the system upon which to assess the
optimality of a decision alternative. In the decision on flood protection, these at-
tributes include safety, monetary cost of measures and damages as well as societal
and environmental consequences. For optimization purposes, we translate these at-
tributes into a unique metric that allows comparing the alternatives in a quantitative
manner. This metric is termed utility u, and the associated utility theory, outlined in Sect. 2.3, forms the basis of normative decision analysis.

2.3 Utility

First, we note that a utility function over monetary values, like the one shown in Fig. 1, is typically increasing, since everybody would prefer more over less money. However, this is not a necessary condition for the theory; in principle, the utility function can have any arbitrary shape.
Second, we note that the utility is not linear with money over the entire domain.
The increase in utility associated with a small increase in wealth, i.e. du(w)/dw, is
called marginal utility. Most decision makers have a marginal utility that decreases
with increasing wealth w. (In economics, this is sometimes referred to as the law of
diminishing marginal utility.) In simple words: obtaining two million Euros is not
simply two times more preferable than obtaining one million Euros.
To understand how the exact form of the utility function is derived, we con-
sider the basic principle of utility theory developed by Von Neumann and Morgen-
stern [49]. This principle is that:2
Utility is assigned to the attributes in such a way that a decision (on which action to take)
is preferred over another if, and only if, the expected utility of the former is larger than the
expected utility of the latter.
That is, the utility function is derived to ensure that among a set of decision alternatives, the preferred one will always result in the higher expected utility, E[U]. Expectation is a mathematical operation which, for the case that the utility depends only on the single random variable Θ, is defined as

$$\mathrm{E}[U] = \int_{-\infty}^{\infty} u(\theta)\, f(\theta)\, \mathrm{d}\theta \quad \text{or} \quad \mathrm{E}[U] = \sum_{\text{all}\ \theta} u(\theta)\, p(\theta) \qquad (1)$$

where u(θ) is the utility as a function of the system state θ, f(θ) is the probability density function (PDF) of Θ if it is continuous, and p(θ) is the probability mass function (PMF) of Θ if it is discrete.
A common way of determining the utility function u(θ) for monetary values is to consider a series of decisions on whether or not to accept a bet. In each bet, there is a given probability of winning or losing specified amounts; by identifying the certain payment at which the decision maker is indifferent between accepting and rejecting the bet, individual points of the utility function can be fixed.
2 For this to hold, a number of consistency requirements must be fulfilled, i.e. the preferences of
the decision maker must fulfill a set of axioms, which, however, are in agreement with what is
commonly considered to be consistent behaviour. As an example, one of the axioms states that the
ordering of the preferences among different outcome events Ei is transitive. Formally, if means
“preferred to” then transitivity demands that if Ej Ek and Ek El then it must also be Ej El .
For a more formal introduction and the full set of necessary axioms, consult e.g. (Luce and Raiffa
[5], Sect. 2.5).
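The following is a minimal sketch of how the expected utility of such a bet is evaluated via the discrete form of Eq. (1), using the exponential utility function that appears later as Eq. (3). The bet amounts and the parameter c are our assumptions for illustration; the certainty equivalent (the certain payment with the same utility as the bet) is what a bet-based elicitation would pin down in practice.

```python
import numpy as np

def utility(w, c=1e-6):
    """Exponential utility u(w) = 1 - exp(-c*w), cf. Eq. (3)."""
    return 1.0 - np.exp(-c * w)

def inverse_utility(u, c=1e-6):
    """Certain payment w with utility u: w = -ln(1 - u)/c."""
    return -np.log(1.0 - u) / c

# A hypothetical bet: win 100,000 EUR with probability 0.5, else lose 50,000 EUR.
outcomes = np.array([100_000.0, -50_000.0])
probs = np.array([0.5, 0.5])

expected_utility = np.sum(probs * utility(outcomes))   # discrete Eq. (1)
certainty_equivalent = inverse_utility(expected_utility)

print(expected_utility)      # E[U] of accepting the bet
print(certainty_equivalent)  # ~22,200 EUR, below the expected value of 25,000 EUR
```

The certainty equivalent falls below the expected monetary value of the bet, which is exactly the risk-averse behavior discussed in Sect. 2.3.3.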
2.3.1 Probability
Decision making based on the expected utility theory requires one to assess the
probability of all relevant system outcomes. In practice, these probabilities must
often be estimated by the decision maker on the basis of limited or no data. The
probabilities represent the knowledge of the decision maker at the time of making
the decision, and are therefore subjective values. The problem of assessing these
probabilities in real situations is further addressed in Sect. 3.1 and in Chap. 12, [42].
2.3.2 Risk
In the context of utility theory and normative decision analysis, we will use the
following definition of risk:
Risk is the expected change in utility associated with uncertain, undesirable outcomes.
Following utility theory, decisions are not made based on risk, but on the basis
of the expected utility (of which risk is a part). The optimal decision is the one that
leads to the highest expected utility. It follows that the risk that should optimally be
taken is the risk associated with this decision.
2.3.3 Risk-Aversion
Utility functions are often concave, like the one of Fig. 1, corresponding to dimin-
ishing marginal utility. When considering losses, this can be explained by the fact
that substantial losses can have consequences that go beyond the direct losses, and
which therefore cannot be compensated by gains elsewhere. As an example, for a
company the loss of 10,000 € is likely to be twice as bad as the loss of 5,000 €, but the loss of 2 million € can be disproportionately worse than the loss of 1 million €
if such a loss threatens the liquidity of the company.
Typically, the utility function is linear (or almost linear) within a range that is
small compared to the working capital of the decision maker. This “size effect” is
illustrated in Fig. 2, showing the difference in the utility function of a small ver-
sus a large company. In the considered range, the utility function is linear for the
large company (these sums are “peanuts” for the insurance company), whereas it is
concave for the small company where the loss of one million is a critical event.
A consequence of the concave shape of the utility function is that decision makers tend to avoid risks. Consider an event A, causing a loss of 10^5 €, and an event B, with associated loss 10^6 €. Assume that the probabilities of these events are p_A = 0.1 and p_B = 0.01. The expected monetary loss of both events is p · Loss = −10^4 €. Assume that the decision maker is the engineering consultancy whose utility function is shown in Fig. 2. The utilities associated with the losses are u(−10^5 €) = −0.09 and u(−10^6 €) = −2.3, respectively. The expected utilities associated with events A and B (the risks) are E[U_A] = 0.1 · (−0.09) = −0.009 and E[U_B] = 0.01 · (−2.3) = −0.023. Therefore, although the expected monetary loss is the same, the risk associated with event B is higher. This effect is commonly referred to as risk aversion.
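The chapter does not state the functional form of the consultancy's utility function; a minimal sketch of the computation follows, using an assumed logarithmic form u(w) = ln(1 + w/w0) with w0 ≈ 1.11 × 10^6 €, chosen because it approximately reproduces the quoted utility values.

```python
import numpy as np

W0 = 1.111e6  # assumed scale parameter [EUR]; chosen so that the utilities
              # roughly match the values quoted in the text (-0.09 and -2.3)

def utility(w):
    """Assumed concave utility with u(0) = 0 and diminishing marginal utility."""
    return np.log(1.0 + w / W0)

events = {
    "A": {"p": 0.10, "loss": -1e5},
    "B": {"p": 0.01, "loss": -1e6},
}

for name, ev in events.items():
    exp_loss = ev["p"] * ev["loss"]            # expected monetary loss
    risk = ev["p"] * utility(ev["loss"])       # expected utility (the risk)
    print(f"Event {name}: E[loss] = {exp_loss:.0f} EUR, "
          f"u(loss) = {utility(ev['loss']):.3f}, E[U] = {risk:.4f}")
# Both events have E[loss] = -10^4 EUR, but event B carries the higher risk.
```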
A widely used measure of risk aversion is the Arrow-Pratt coefficient of absolute risk aversion (ARA):

$$\mathrm{ARA}(w) = -\frac{u''(w)}{u'(w)} \qquad (2)$$

where u'(w) = du(w)/dw is the first derivative and u''(w) = d²u(w)/dw² the second derivative of the utility function with respect to wealth w. Figure 3 shows several utility functions with varying ARA. These are of the form

$$u(w) = 1 - \exp(-cw). \qquad (3)$$

This utility function results in an ARA(w) = c that is constant for all values of w (you can verify this claim by inserting the utility function in Eq. (2)). For a negative ARA, the decision maker is said to be risk seeking. This corresponds to a convex utility function, as exemplified in Fig. 3 by the utility function with ARA = −1.

Alternative measures of risk aversion exist, e.g. the Arrow-Pratt coefficient of relative risk aversion (RRA):

$$\mathrm{RRA}(w) = -w\,\frac{u''(w)}{u'(w)}. \qquad (4)$$
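The claim made below Eq. (3) can be verified by direct differentiation:

$$u'(w) = c\, e^{-cw}, \qquad u''(w) = -c^2 e^{-cw}, \qquad \mathrm{ARA}(w) = -\frac{u''(w)}{u'(w)} = -\frac{-c^2 e^{-cw}}{c\, e^{-cw}} = c.$$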
There is a vast body of literature available investigating these and other measures
of risk aversion (e.g. Menezes and Hanson [30]; Binswanger [11]), most of which
is rather technical. It is, however, important to realize that the utility function is an
empirical function and there is no mathematical form of the utility function that
is justified by some "universal law". In fact, Rabin [33] shows that even relatively weak assumptions on the form of the utility function, namely the assumption
of diminishing marginal utility for all levels of wealth w, can lead to absurd pre-
dictions when extrapolating from decisions involving small sums to decisions with
large consequences. The reason behind this is that people do not generally behave
consistently according to the expected utility theory, as discussed later in Sect. 3.
This observation does not invalidate the use of expected utility theory, but it points
to the fact that extrapolation of the utility function assuming some underlying math-
ematical form (like the one of Eq. (3)) should not be performed. If this is taken into
consideration, then utility theory (and the measures of risk aversion) provides rules
for optimizing decisions under uncertainty and risk.
Many decisions involve events with consequences that are small compared to the
“working capital” of the decision maker. This is particularly true if the decision
maker is society or a representative of society, e.g. a governmental body such as
the federal transportation administration. In this case, the utility function will be
linear with respect to monetary values. As we have seen earlier, the ordering of the
expected utility of different decision alternatives is not altered by a linear transfor-
mation of the utility function; we can thus set the utility function equal to mone-
tary values when all consequences are in the linear range of the utility function. In
this case, the decision problem can be reduced to an economic cost-benefit analysis
(Chap. 11, [36]).
Because monetary values are commonly used in society and economics for ex-
changing and comparing the value of different goods and units, decisions are often
assessed based on expected monetary values. However, it is important to be aware
that such an approach is only valid under the conditions stated above (i.e., a linear
utility function in the relevant range of consequences). For example, if the engineering consultancy in the example above were to make its decision based on expected monetary values, it would decide not to buy the insurance, which would not be optimal according to the company's preferences as expressed by the non-linear utility function.
So far we have seen utility functions of a single attribute (wealth), yet in most real-
life problems involving risks, consequences are associated with several attributes
(e.g. economical cost and safety). When multiple attributes are relevant, it becomes
necessary to define joint utility functions of the different attributes. Multi-attribute
utility theory (MAUT) as presented in Keeney and Raiffa [3] is concerned with
decision problems involving multiple attributes.
As an example, consider a decision problem with two attributes X₁ and X₂. A possible joint utility function is constructed from the marginal utility functions u₁(X₁) and u₂(X₂) by

$$u(X_1, X_2) = c_1 u_1(X_1) + c_2 u_2(X_2) + c_{12}\, u_1(X_1)\, u_2(X_2). \qquad (5)$$

In this case, the two attributes X₁ and X₂ are said to be utility independent. Often, c₁₂ = 0 and the joint utility function reduces to

$$u(X_1, X_2) = c_1 u_1(X_1) + c_2 u_2(X_2). \qquad (6)$$

In this case, the two attributes X₁ and X₂ are said to be additive utility independent.
Once the joint utility function u is established, decision analysis proceeds as in
the case of the single attribute: the optimal decision is identified as the one that leads
to the highest value of the expected utility.
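A minimal sketch of the additive form of Eq. (6) follows, for a two-attribute decision between hypothetical designs. The marginal utilities, the weights c₁, c₂ and the design values are all our assumptions for illustration; in practice they would be elicited from the decision maker.

```python
import numpy as np

# Hypothetical marginal utilities, each scaled to [0, 1] (an assumption).
def u_cost(cost):             # lower cost -> higher utility; cost in EUR
    return 1.0 - cost / 10e6

def u_safety(p_failure):      # lower failure probability -> higher utility
    return 1.0 + np.log10(p_failure) / 6.0   # valid for p in [1e-6, 1]

c1, c2 = 0.4, 0.6             # assumed trade-off weights

def joint_utility(cost, p_failure):
    """Additive joint utility, cf. Eq. (6)."""
    return c1 * u_cost(cost) + c2 * u_safety(p_failure)

# Two designs: cheap but less safe (A) vs. expensive but safer (B).
designs = {"A": (4e6, 1e-3), "B": (6e6, 1e-5)}
best = max(designs, key=lambda d: joint_utility(*designs[d]))
print({d: round(joint_utility(*v), 3) for d, v in designs.items()}, "->", best)
```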
We do not go further into the details of MAUT, but we note that whenever mul-
tiple attributes are present (and they are so in most decision problems), a joint
utility function is necessary to make consistent decisions. It is important to be
aware of this, because it is sometimes argued that it is unethical to assess at-
tributes such as the health of humans or ecological values by the same metric as
monetary values (in particular if that metric happens to be the monetary value it-
self). These arguments are generally misleading, however. In the end, a decision
is made, which always implies a trade-off between individual attributes. If two de-
signs for a new roadway are possible, one with lower costs and one with lower
environmental impacts, then the final decision made will imply a preference that
weights these two attributes, if only implicitly. In fact, it is possible to deduce an
implicit trade-off from past decisions. Viscusi and Aldy [48] present an overview
on research aimed at estimating the “value of a statistical life” based on soci-
etal decisions and choices, and Lentz [28] demonstrates how such deduced trade-
offs can be used to assess the acceptability of engineering decisions. The prob-
lem with not making these trade-offs explicit is the possibility of making decisions that reflect an inconsistent assessment of society's preferences and which lead to an inefficient use of resources. An example of such inconsistent decision
making is given by Tengs [44], who compares 185 potential life-saving measures
that are or could be implemented in the United States. She finds that with cur-
rent policies, around 600,000 life years are saved by these measures at a cost of
21 Billion US$ (the numbers are valid for the 1990s). By optimizing the imple-
mented measures using cost-effectiveness criteria, she concludes that with the same
amount around 1,200,000 life years could be saved. It follows that the inefficient
use of resources here leads to a loss of around 600,000 life years (corresponding to
around 15,000 premature deaths each year that could be avoided at no additional
cost).3
The above argument does not discard the benefits of communicating the values
of individual attributes for different decision alternatives. In particular for important
and complex decisions it is strongly advocated that decision makers and stakehold-
ers should be given the information on the effect of their decisions on all the relevant
attributes.
3 We note that, in principle, such a cost-effectiveness analysis does not require us to assign our
preferences, i.e. it is not necessary to make the trade-off between money and safety explicit. The-
oretically it would be sufficient to list the measures according to their effectiveness, as done by
Tengs [44], and then, starting from the top of the list, select all measures that are affordable. In practice, however, such an approach is not possible, because these measures are implemented by different governmental agencies and other actors who do not plan jointly. By assigning an explicit trade-off between safety and cost (i.e. by putting a monetary value on human life), however, it can be ensured that money is spent optimally even without performing a joint optimization.
Each decision can be tested individually against the criteria set by decision analysis, based on the
joint utility function of life-savings and money (see also Lentz [28]).
2.4 Optimizing Decisions

Utility theory prescribes that the optimal set of decisions is the one maximizing the expected utility. Therefore, normative decision analysis essentially corresponds to computing the expected utility for a given set of decisions a, E[u(a, Θ) | a], and then solving the optimization problem

$$a_{\mathrm{opt}} = \arg\max_a\, \mathrm{E}\big[u(a, \Theta) \mid a\big]. \qquad (7)$$

The operator arg max_a reads: the value of the argument a that maximizes the expression on the right-hand side. The expectation E[·] is with respect to the random variables describing the uncertain system state Θ = [Θ₁; …; Θₙ]. It is defined as

$$\mathrm{E}\big[u(a, \Theta) \mid a\big] = \int_{\theta_1} \cdots \int_{\theta_n} u(a, \boldsymbol{\theta})\, f(\boldsymbol{\theta})\, \mathrm{d}\theta_1 \cdots \mathrm{d}\theta_n. \qquad (8)$$

This is a generalization of Eq. (1) to the case of multiple random variables. Equation (8) applies to the case where all uncertain quantities Θ = [Θ₁; …; Θₙ] are described by random variables with joint probability density function f(θ). If all or some of the random variables are discrete, the corresponding integration operations in Eq. (8) must be replaced with summation operations.
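To make Eqs. (7) and (8) concrete, the following sketch optimizes a flood protection level by Monte Carlo approximation of the expectation. The cost and damage model and the Gumbel distribution parameters are our assumptions, not values from the chapter.

```python
import numpy as np

rng = np.random.default_rng(42)

def utility(a, theta):
    """Hypothetical utility: cost of protection level a [m] minus flood damage;
    theta is the uncertain flood water level [m]."""
    cost = 2.0 * a                                   # protection cost
    damage = 100.0 * np.maximum(theta - a, 0.0)      # damage if level exceeds a
    return -(cost + damage)

# Eq. (8) approximated by Monte Carlo over Theta ~ Gumbel, a common extreme-value
# model for annual maximum water levels (parameters assumed for illustration).
theta_samples = rng.gumbel(loc=2.0, scale=0.5, size=100_000)

actions = np.linspace(0.0, 6.0, 61)                  # candidate protection levels
expected_u = [utility(a, theta_samples).mean() for a in actions]

a_opt = actions[int(np.argmax(expected_u))]          # Eq. (7)
print(f"optimal protection level: {a_opt:.1f} m")
```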
To represent and model the decisions a and their effect on (expected) utility, deci-
sion trees and influence diagrams have emerged as useful tools. The presentation in
this section is limited to decision problems with given information, i.e. for problems
in which all uncertain quantities are described by known probability distributions
and it is not possible to gather further information. The possibility to collect further
information will be introduced in Sect. 2.6.
In a decision tree, all decisions a as well as random vectors Θ describing the states of the system are modeled sequentially from left to right. Each decision alternative is shown as a branch in the tree, as is each possible outcome of the random variables. A generic decision tree is shown in Fig. 4, with only one random variable Θ with m outcome states θ₁, …, θₘ. The tree is characterized by the different decision alternatives a, the system outcomes Θ described by a probability distribution conditional on a, and the utility u as a function of a and Θ. The decision alternatives as well as the system outcomes can be defined either in a discrete space, a continuous space or a combination thereof.
The analysis proceeds from left to right: for each decision alternative ai , the
expected value of the utility is computed following Eq. (8) and the optimal decision
is found according to Eq. (7).
Illustration 2 (Pile Selection) This example, which involves only discrete random variables and decision alternatives, is due to Benjamin and Cornell [10]. A construction engineer has to select the length of steel piles at a site where the depth to the bedrock is uncertain. The engineer has the choice between 15 m and 20 m piles (decisions a₁ and a₂), and the possible states of nature are a 15 m or a 20 m depth to the bedrock (states θ₁ and θ₂). The consequences (utility) associated with each combination of decision and system state are summarized in Table 1.

Table 1 Utilities for the pile selection problem

                      θ₁ (depth 15 m)   θ₂ (depth 20 m)
a₁ (15 m piles)              0               −400
a₂ (20 m piles)           −100                  0

The probabilities of the different outcomes are p(θ₁) = 0.7 and p(θ₂) = 0.3. The full decision tree for this problem is shown in Fig. 5. The expected utilities for decisions a₁ and a₂ are obtained as E[U | a₁] = 0.7 · 0 + 0.3 · (−400) = −120 and E[U | a₂] = 0.7 · (−100) + 0.3 · 0 = −70. Obviously, the optimal decision is to order the larger piles.
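A minimal sketch of the corresponding expected-utility evaluation, with the probabilities and utilities as given in the text:

```python
# Expected-utility evaluation of the pile selection decision tree (Fig. 5).
p = {"theta1": 0.7, "theta2": 0.3}                  # depth 15 m / 20 m

utilities = {
    "a1_15m_pile": {"theta1": 0, "theta2": -400},   # pile too short: splicing
    "a2_20m_pile": {"theta1": -100, "theta2": 0},   # pile too long: cutting
}

expected_u = {
    a: sum(p[s] * u for s, u in outcomes.items())
    for a, outcomes in utilities.items()
}

a_opt = max(expected_u, key=expected_u.get)
print(expected_u)     # {'a1_15m_pile': -120.0, 'a2_20m_pile': -70.0}
print("optimal:", a_opt)
```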
The decision tree grows exponentially with the number of decisions and ran-
dom variables considered, due to the necessary ordering of decisions and random
variables (each decision must be made conditional on the decisions and random
variables to its left, and each random variable is described by a probability distribu-
tion conditional on the decisions and random variables to its left). The decision tree
is thus not convenient for representing decision problems involving more than just
a few parameters. A more efficient and flexible alternative is the influence diagram,
introduced in the following section.
2.5 Influence Diagrams
An influence diagram encodes the probabilistic model of the system, the decisions and the utilities, together with the assumptions regarding independence among variables. Examples for the construction of such models are given e.g. in Jensen and Nielsen [2], Straub [6]. Free
software that allows the construction and computation of influence diagrams (and
Bayesian networks) is available, e.g. the Genie/Smile code that can be downloaded
from http://genie.sis.pitt.edu/.
2.6 Pre-Posterior Decision Analysis

Previously, we have assumed that all information is available at the time of making the decision and that it is not possible to obtain additional information on the uncertain state of nature Θ. However, in most cases when decisions must be made under
conditions of uncertainty, it is possible to gather additional information to reduce the
uncertainty prior to making the decisions a. As an example, in the decision on flood
protection, it might be possible to perform additional detailed studies to reduce the
uncertainty in estimating damages for given levels of flood. The question that must
be answered is: is it efficient to collect additional information before deciding a? Or
in other words: is the value of the information higher than the cost of obtaining it?
Preposterior decision analysis aims at optimizing decisions on gathering addi-
tional information e, together with decisions on actions a (the letter e is derived
from the word experiment). Typical applications of preposterior decision analysis
are:
– Optimization of monitoring systems and inspection schedules
– Decision on the appropriate level of detailing in an engineering model
– Development of quality control procedures
– Design of experiments
It is important to realize that collecting and analyzing information does not alter
the system. (Exceptions are destructive tests, which sometimes worsen the state of
the system.) For this reason, decisions on gathering information e do not directly
lead to a change in the risk, unlike decisions on actions a. The benefit of e is the
reduction in uncertainty on the system state Θ, which in turn facilitates the selection
of optimal actions a. Preposterior decision analysis allows quantifying this benefit,
the so-called value of information. (The word preposterior derives from the fact
that we calculate in advance (pre-) the effect of information on the model, i.e. the
updating of the prior model with the information to the posterior model.)
The optimal combination of experiment e and subsequent actions a is found by solving (see Raiffa and Schlaifer [35])

$$e_{\mathrm{opt}} = \arg\max_e\, \mathrm{E}\Big[\max_a\, \mathrm{E}\big[u(e, Z, a, \Theta) \mid e, Z, a\big] \,\Big|\, e\Big]$$

where the utility is now a function of the selected experiments e, the outcome of the experiments Z, the state of the system Θ and the final actions a, u(e, z, a, θ), and the expectation is with respect to the system state Θ and the experiment outcomes Z.
Details on how to compute the above expectations, as well as on modeling the
information, can be found in the literature, in particular in the classical reference of
Raiffa and Schlaifer [35] and in Straub [43]. Here, we restrict ourselves to presenting
the computations by means of an illustrative example in the following.
The influence diagram can be implemented in software, since all the relevant
information is provided earlier in the text. For this small example, calculations can
also be performed manually, as illustrated in Straub [6]. The decision not to inspect
leads to an expected utility of −70, as was calculated earlier. The decision to inspect
leads to an expected utility of −60, and is therefore optimal. The reason for this
higher utility is that the test might indicate a lower depth and the smaller pile can be
chosen in this case. Even though this indication is not completely reliable (there is a
probability Pr(Θ = θ₂ | Z = z₁) = 0.07 that the depth is 20 m despite an indication
of 15 m), it is sufficiently accurate to provide a higher expected utility.
The value of information of the test can be computed by comparing the expected
utility with and without the test and subtracting the cost of the test itself. For the
considered sonic test, the value of information is −60 − (−70) − (−20) = 30.
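A minimal sketch of such a pre-posterior computation follows. The chapter's actual test reliabilities were given in a part not reproduced here, so the likelihoods below are illustrative assumptions; the resulting numbers therefore differ from the −60 and 30 quoted above, but the structure of the computation is the same.

```python
# Pre-posterior analysis for the pile example: should the sonic test be done?
prior = {"theta1": 0.7, "theta2": 0.3}
utilities = {"a1": {"theta1": 0, "theta2": -400},
             "a2": {"theta1": -100, "theta2": 0}}
likelihood = {"z1": {"theta1": 0.95, "theta2": 0.15},   # z1: test indicates 15 m
              "z2": {"theta1": 0.05, "theta2": 0.85}}   # z2: test indicates 20 m
TEST_COST = 20  # assumed cost of the test, in utility units

def best_expected_utility(p):
    """Max over actions of the expected utility under belief p (inner E[.])."""
    return max(sum(p[s] * u for s, u in out.items())
               for out in utilities.values())

# Without the test: optimize against the prior (gives -70, as in the text).
eu_no_test = best_expected_utility(prior)

# With the test: for each outcome z, update beliefs by Bayes' rule, act
# optimally, and weight by the pre-posterior probability of z.
eu_test = -TEST_COST
for z, lik in likelihood.items():
    p_z = sum(lik[s] * prior[s] for s in prior)
    posterior = {s: lik[s] * prior[s] / p_z for s in prior}
    eu_test += p_z * best_expected_utility(posterior)

print(f"E[U] without test: {eu_no_test:.1f}, with test: {eu_test:.1f}")
print(f"value of information (gross): {eu_test - eu_no_test + TEST_COST:.1f}")
```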
3 Descriptive Decision Theory

According to a popular anecdote, a famous decision theorist, faced with the decision of whether to move to a competing university, was advised to apply his own theory, and supposedly replied that the matter was far too serious for that. It sheds light on the dispute between the different branches of decision theory that the decision theorist in question, Howard Raiffa, never actually said this, but on the contrary did decide to move to Harvard using a formal decision analysis to guide his decision, as he recalls in [34].

Broadly, the limitations of normative decision theory can be divided into the following two categories: (1) situations in which the probabilities and outcomes of the relevant events cannot be determined, and (2) unrealistic assumptions about actual human decision-making behavior.

3.1 Unknown Probabilities and Outcomes

Consider, as an example of the first category, the typical decision situations
of entrepreneurs or politicians, as they are typically not faced with decision situ-
ations in which all different outcomes along with their probabilities are known in
advance. In many situations, decision-makers (regardless of which decision the-
ory is used) are unable to rigorously determine probabilities and outcome values
of all risk-related events in advance (Sect. 2.3.1). Risk managers, entrepreneurs and other decision-makers typically encounter situations that are not entirely mathematically resolvable, unlike a bet on a number in the roulette game, where the probabilities of winning and losing as well as the potential payoffs are known in advance to all players (i.e. decision-makers). This is rarely the case in complex socio-
technical risk problems. This might call into question the usefulness of economic
risk experiments that use gambles to understand risk decision making (Stanton and
Welpe [41]).
Whenever accurate predictions are necessary (e.g. when important issues are at stake) but impossible, it is better to realize and accept these limitations instead of falsely relying on an alleged and delusive certainty. For some problems, the issues can be addressed by making a decision analysis and forecast based on the best available estimates, followed by sensitivity analyses. For problems that are not sufficiently well understood, whose parameter interrelations are not well known, in particular social and economic systems that are inherently complex, self-emergent and variable, it is often impossible to accurately predict the future of
such systems. It is advisable to employ several alternative approaches for risk as-
sessment and risk decisions in order to harvest the strengths of multiple approaches
and compensate for their respective limitations.
3.2 The Model of the "Homo Oeconomicus"

The assumptions of normative decision theory closely resemble and are based on the well-known (some people think: infamous) "Homo Oeconomicus". Homo Oe-
conomicus is an artificial model of human perception and decision-making, who
is self-oriented, has preferences that are stable over time and is able to process in-
formation fully and rationally. Following Kirchgässner [26], “Homo Oeconomicus”
lives in an unrealistic world in which all information including probabilities and
outcome values of all choice options are known and freely available without any
transaction costs, which also include the time and energy necessary to search, eval-
uate, contract, and control information and information providers (e.g. Kirchgässner
[26]). The model of “Homo Oeconomicus” makes a number of additional assump-
tions among which are optimality, universality and omniscience (Kurz-Milcke and
Gigerenzer [27]). Here, optimality means that individuals strive for the best possible
solution instead of a solution which is good-enough. Omniscience implies that in-
dividuals have complete information about positive and negative consequences of a
decision. Kurz-Milcke and Gigerenzer [27] further argue: (1) that universality is an
expression of the idea that a common currency or calculus exists which underlies all
decisions, (2) that normative decision theory assumes that humans are always both
willing and cognitively capable of identifying the optimal decision, which would be
one that maximizes according to a certain criterion (e.g. money, happiness), (3) that
individuals (as well as organizations) are fully aware of all existing decision possi-
bilities and their associated costs, benefits and probabilities in the present and future.
Of course, these assumptions are a "mathematical idealization" of reality, and are not adequate to completely describe the current evaluations, decisions and behaviors of people, let alone predict their future utilities and actions. The question to ask is whether a completely accurate description is necessary, useful or helpful for any given risk management problem.
Previous research has repeatedly shown that the formal conceptualization of ra-
tional decision-makers and the empirically observed human behavior differ sub-
stantially (e.g. Tversky and Kahneman [47]; Kahneman and Tversky [23]). Already in 1970, Akerlof [8] argued that information is typically unevenly shared between any two transaction partners, making ubiquitous "information asymmetry" the rule rather than the exception. Having full information during a decision process is in
reality impossible. Furthermore, transaction costs exist in virtually all transactions
(Coase [13]). Even if such a world ever existed in which all information is known
and freely available, Simon [38] was one of the first scholars to point out that the
limited cognitive ability of individuals limits the identification of any best option
from several alternatives. People are simply unable to process and evaluate every
alternative in an acceptable time frame.
Ford et al. [16] review 45 studies that investigate the outcomes of decision-
making and show that humans often use heuristics instead of weighing pros and
cons as normative decision theory would predict. They conclude with the statement
that “the results conclusively demonstrate that non-compensatory4 strategies were
the dominant mode used by decision makers. Compensatory strategies (i.e. trad-
ing off good and bad aspects of two competing alternatives—parentheses added by
Straub and Welpe) were typically used only when the number of alternatives and
dimensions were small or after a number of alternatives have been eliminated from
consideration”.
The following section introduces two theories of decision making that address the limitations of the classical theory for descriptive decision analysis: (a) prospect theory, which emphasizes the limitations and the cognitive and affective biases of human decision making, and (b) the approach of ecological rationality, which emphasizes the human ability to make correct decisions under limited time and information through the use of heuristics and "gut feelings". The goal of this section is to outline both perspectives.

3.3 Prospect Theory

Prospect theory was introduced by Kahneman and Tversky [23]. Kahneman was awarded the Nobel Prize in Economic Sciences in 2002 "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty" (Royal Swedish Academy of Sciences [46, p. 1]). Their work integrates normative decision theory with insights
from behavioral sciences and cognitive psychology. Furthermore, they introduced
experiments as an innovative methodology for economics in their research. These
developments have laid the foundation for a new field of research called behavioral
economics, which has been the starting point of a paradigmatic shift in the study
of human decision-making under risk. In contrast to expected utility theory, which
is considered to be a prescriptive and normative theory, prospect theory is a de-
scriptive theory of human behavior in decision making under risk constituting an
extension of the normative expected utility theory.
One of the main contributions of prospect theory is in its explicit consideration
and inclusion of the observer-dependent perceptions of utility and in the subjective
weighting of outcome probabilities. An important aspect economists have previ-
ously overlooked (some continue to overlook it) is that human preferences with
regard to seemingly “objective facts” are highly context-dependent and can conse-
quently show a great deal of inter-individual differences. To illustrate this further:
a glass of water can be worth a few pennies if you are sitting at home and are not
thirsty and it can be worth a million dollars if you are alone in the desert, close to
dying of thirst. This seemingly trivial example illustrates a central point. Standard
economic theory uses normative decision theory, which has not found a way yet to
incorporate how individuals perceive, evaluate, weigh and judge objective proba-
bilities, risks, outcomes, costs and benefits depending on the context and their sub-
jective mental states. Even though these observations and deliberations are hardly
surprising to social scientists, especially psychologists, and probably also to the av-
erage lay person, they had a great impact on economists and economic theory, for
reasons outlined in Sects. 3.1 and 3.2.
Kahneman, Tversky and colleagues empirically investigated the value function of individuals, in which the loss curve declines more steeply than the gain curve rises, relative to the person's respective reference point or status quo. A main finding of prospect theory (e.g. [23]) is that people react more sensitively to losses (i.e. changes below their individual status quo on the value function) than to gains (i.e. changes above the individual status quo), even when the resulting value of the outcome is the same (so that normative decision theory predicts the same utility).
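The chapter gives no parametric form for the value function; a commonly used specification from Tversky and Kahneman's later work, v(x) = x^α for gains and −λ(−x)^β for losses with α = β = 0.88 and λ = 2.25, illustrates the asymmetry. The parameters below are those literature estimates, used here purely as an assumption.

```python
import numpy as np

# Prospect-theory value function in a commonly used parametric form
# (parameters from Tversky & Kahneman's later estimates, for illustration only).
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of a gain/loss x relative to the reference point."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x)**ALPHA, -LAMBDA * np.abs(x)**BETA)

# Losses loom larger than gains of equal magnitude:
print(value(100.0))    # ~  57.5
print(value(-100.0))   # ~ -129.4  (|v(-100)| > v(100): loss aversion)
```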
Furthermore, this line of research has consistently shown that the subjective per-
ceptions of objectively equal risk alternatives can vary because of different wording
and phrasing of the decision alternatives. The most basic example of this kind is to describe a glass filled to half its capacity as 50 % empty versus 50 % full. Scholars (e.g. Tversky and Kahneman [7]; Levin and Chapman [29]) have repeatedly demonstrated in numerous experiments that individuals' preferences change simply due to different wording alone, the so-called "framing" effect.
Kahneman, Knetsch, and Thaler [24] argue that loss aversion described in
prospect theory influences decision processes in that humans are generally more
negative about potential losses (risks) than they are positive about possible gains
(opportunities). Related to prospect theory, Kahneman et al. [24] have identified a
number of additional cognitive biases and so-called irrational "anomalies" in human decision-making, for instance the status quo bias and the endowment effect (Samuelson and Zeckhauser [37], Kahneman et al. [24]).
The endowment effect is closely associated with loss aversion (Thaler [45]) and becomes salient when the loss of an asset weighs much more heavily in the decision-making than a gain of an asset of the same size and value would. The decisive aspect of the endowment effect stems from the ownership of an object: research on the endowment effect shows that assets are valued more highly when they are in the possession of the decision-maker than when they are not. Again, this finding confirms that subjective perceptions of seemingly objective characteristics are central to describing and predicting human decision-making. In a similar way,
the status quo bias describes the tendency of individuals to prefer the status quo
over taking chances and risks in decision making (Samuelson and Zeckhauser [37],
Fernandez and Rodrik [14]).
According to status quo bias theory, consumer choices depend on which option
is framed as the default (i.e. status quo) option. Kahneman et al. [24] have sug-
gested that the status quo bias is the result of a combination of loss aversion and
the endowment effect. For politicians, management executives and anyone manag-
ing risk-related challenges, the status quo bias means that thinking about what will
constitute the “default” in the organization or decision processes will greatly influ-
ence which decisions will be taken. An example for a risk-related default would be
an organizational rule such as “safety first—when in doubt do what is best for the
safety of our products and not what is best from an economic perspective".
3.4 Ecological Rationality and Heuristic Decision Making

The previous sections have dealt with the abilities and inabilities of humans to opti-
mize decisions and make full use of all information available. More often than not,
individuals have to make decisions under limited time and information, which rules
out the application of any analytic decision making procedure to determine an “op-
timal” decision. How do people decide in situations like this? To illustrate this, we
first consider an example.
Gigerenzer [20] gives an example that mirrors the different theories and ap-
proaches of decision making humans can use: the problem of catching a ball fly-
ing in the air in baseball. One could approach this problem by calculating all the
probabilities and utilities or one could use a simple heuristic to catch the ball. It is
impossible for humans to know all necessary parameters of the flight of the ball to
correctly calculate the “parabolic trajectories”, i.e. the “ball’s initial distance, ve-
locity, wind strength and projection angle” necessary to catch the ball. All of these
parameters would need to be assessed and calculated in the short time while the ball
is in the air. As the calculation of these parameters is impossible, Gigerenzer [20]
suggests, the use of so-called “heuristics”, in this case the gaze heuristics to accom-
plish the task of catching the ball. The gaze heuristic works in the following way: a player fixates the ball, starts running, and adjusts his or her running speed such that the angle of gaze to the ball remains constant. The player will probably be unable to know or "calculate" where exactly the ball will touch the ground; but, more importantly, by keeping the angle between his or her eyes and the ball constant, the player will be at the spot where the ball lands. The gaze
heuristic is a well-known example of a fast and frugal heuristic. It is called fast, be-
cause the heuristic can address problems within matters of seconds, and it is called
frugal because it requires little information to work accurately.
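The following is a toy simulation of the gaze heuristic in a two-dimensional world, our own construction rather than anything from the chapter: the fielder repeatedly moves toward the position that restores the initial gaze angle, capped by a maximum running speed, and ends up near the landing point without ever predicting the trajectory.

```python
import numpy as np

g, dt = 9.81, 0.005
bx, bh = 0.0, 1.5          # ball position: horizontal [m], height [m]
vx, vh = 12.0, 14.0        # ball velocity components [m/s] (assumed)
px, max_speed = 35.0, 8.0  # fielder position [m] and top running speed [m/s]

target_tan = bh / (px - bx)   # tangent of the initial gaze angle

while bh > 0.0:
    # simple projectile motion of the ball
    bx += vx * dt
    vh -= g * dt
    bh += vh * dt
    if bh <= 0.0:
        break
    # gaze heuristic: move toward the distance that keeps the gaze angle constant
    desired_px = bx + bh / target_tan
    step = np.clip(desired_px - px, -max_speed * dt, max_speed * dt)
    px += step

print(f"ball lands at {bx:.1f} m, fielder stands at {px:.1f} m")
```

Note that the controller uses only the currently observed gaze angle, one piece of information, which is exactly what makes the heuristic "frugal".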
Descriptive (and behavioral) decision theory generally agrees that the human in-
formation processing capacity is limited, for example through cognitive and affec-
tive biases, which make human decision making in general—including heuristic
decision making—sub-optimal. In contrast, the heuristics approach as pointed out by Gigerenzer and colleagues takes an evolutionary perspective and argues that
such “fast and frugal heuristics” have emerged as a result of human evolution in
order to facilitate good decision-making under limited information and time.
Gigerenzer [18, 19] and colleagues are also critical of behavioral economics for a
number of points. First, with regard to biases (see Sect. 3.3) they argue that these are
“first-best solutions” and “environmental adjustments” of human decision making
resulting from long evolutionary processes. In contrast to behavioral economists, they do not categorize heuristic decision making or so-called "irrationalities" in decision making in any negative way as "errors" or "second-best solutions". They argue
that calculating probabilities is much more difficult to accomplish for humans than
understanding frequencies (Gigerenzer [18]). Their basic argument is that bounded
rationality as introduced by Herbert Simon and what he calls effective “ecological
rationality” (i.e. heuristic decision making) do not contradict each other and in fact
often co-exist together closely (Gigerenzer and Goldstein [1], Gigerenzer [20]). The
original thinking behind this idea is that heuristic decision making, i.e. decision-
making that is not based on an exact number or their calculations, is more efficient
than decision making based on classic utility maximization. In other words, heuris-
tics are particularly efficient in situations with limited information and time for decision making, where mathematical optimization is impossible; this is regularly the case in managerial, political (and also personal) decisions. Heuristics nevertheless need to be constantly adapted to fit the contexts in which they are applied, as no heuristic is effective or useful in all decision situations.
In the following, we present three examples of heuristics: the representativeness heuristic, the availability heuristic and the affect heuristic.

The representativeness heuristic refers to judgments of the probability of a future event or of the representativeness of a sample. In other words, it describes individuals' subjective assessment of probabilities based on the comparison of previous experiences with events or individuals that resemble a current event or sample. Particularly important is the subjectively perceived similarity, which can lead to misjudgments: the more similar individuals perceive events to be, the more likely they are to ignore important information and prior probabilities about a current situation or sample.

Another important heuristic is the availability heuristic, which refers to the evaluation of the probability of events based on one's own previous experiences and memories that can be easily recalled. The more easily they are recalled, the higher individuals evaluate the likelihood of similar current events (Kahneman and Tversky [22]).

Finally, the affect heuristic refers to judgments of risks and benefits that are guided by the positive or negative feelings momentarily associated with an option, rather than by a deliberate weighing of probabilities and outcomes (Finucane et al. [15]).
4 Discussion
All models are wrong, but some are useful.5
This chapter has outlined a number of different decision theories, all of which have
their merits and their limitations. The choice of the theoretical approach must thus
be problem dependent, as emphasized throughout the text. Table 3 summarizes the
three main decision theories presented in this chapter.
The classical decision theory is highly refined, and current research in this area focuses mostly on computational aspects of the optimization problem in various fields of application. There are, however, some novel developments which address the difficulty of realistically assessing probabilities in real decision situations. One example is info-gap theory, which was developed to provide robust decisions on a non-probabilistic basis (Ben-Haim [9]). The descriptive and the
heuristic theories, due to their empirical nature and shorter history, seem wide open
for development and adaptation. In addition, there is ample potential for research
on the application of both lines of decision theory to practical problems involving
risk. Real decisions (be it in business, technology, politics or other fields) are sel-
dom based on rigorous applications of decision theory, be it normative, descriptive
or heuristic. One reason for this lack is the gap between researchers living in an
“idealized world” and the practitioners dealing with the “dirty reality”.
Concerning the different lines of decision theory, researchers should aim to link
the formalism of classical utility analysis with the empirical appropriateness of
descriptive and behavioral models. In order to understand and improve decision
making on systemic and complex risks, an integrative perspective of normative, de-
scriptive and heuristic decision making may offer many benefits. Another promising
area for future research would be to study the normative and behavioral perspectives
looking at group decisions as opposed to individual decisions. Furthermore, scholars
may want to examine which institutions (rules, regulations, etc.) can be successfully
implemented in order to enhance the effectiveness and efficiency of individual and
group decisions (e.g. debiasing strategies).
5 Food for Thought

• What is the value of economics and classical utility theory given that they make
a number of often unrealistic assumptions? Where can they and where can they
not create value added by applying them?
• It has been said that all models are wrong to some degree. Is there a point, however, where a model becomes "too wrong" or "right enough", and if so, how would one know?
• How can economic theory account for the role of subjective perception of “ob-
jective” values and probabilities in human decision making?
• What is the value of information and how can it be assessed?
• How does one (theoretically) construct a utility function for a decision maker,
following the classical utility theory?
• From two engineering designs for a tunnel construction, which differ only in safety and cost, one is selected. How can the implicit trade-off between safety and cost be deduced from this selection?
• It has been argued that by not following the expected utility principle when mak-
ing decisions involving life safety, “we are in effect killing people”. Discuss this
statement.
• A popular “economics joke”: what do economists mean when they write in the
conclusion of their paper: "The evidence for our hypotheses is mixed"? It means that economic theory supports the hypotheses but the empirical data do not.
Discuss.
6 Summary
Classical normative decision analysis, which is based on the expected utility the-
ory developed by mathematicians, provides an axiomatic framework for optimiz-
ing decisions under uncertainties. It is well suited for identifying optimal decisions
when coping with risks if probabilities and consequences of adverse events can be
reasonably well quantified. Descriptive decision analysis is a generalization of the
expected utility theory, accounting for the influence of psychological factors on the
decisions made. It is better suited than the classical theory to describe the behavior
of humans under uncertainty and risk. Finally, the chapter outlines newer attempts
to formalize heuristic decision making, which is based on relatively simple rules,
and which assume that these heuristics have developed in an evolutionary process.
These theories are particularly well suited to describe (and sometimes optimize)
decision making under uncertainty and limited time and information.
References
Selected Bibliography
1. G. Gigerenzer, D.G. Goldstein, Reasoning the fast and frugal way: models of bounded ratio-
nality. Psychol. Rev. 103, 650–669 (1996)
2. F.V. Jensen, T.D. Nielsen, Bayesian Networks and Decision Graphs. Information Science and
Statistics (Springer, New York, 2007)
3. R.L. Keeney, H. Raiffa, Decisions with Multiple Objectives (Wiley, New York, 1976).
Reprinted by Cambridge University Press, 1993
4. G.F. Loewenstein, E.U. Weber, C.K. Hsee, N. Welch, Risk as feelings. Psychol. Bull. 127,
267–286 (2001)
5. R.D. Luce, H. Raiffa, Games and Decisions: Introduction and Critical Survey (Wiley, New
York, 1957)
6. D. Straub, Lecture notes in engineering risk analysis. TU München (2011)
7. A. Tversky, D. Kahneman, The framing of decisions and the psychology of choice. Science
211, 453–458 (1981)
8. G. Akerlof, The market for ‘lemons’: quality uncertainty and the market mechanism. Q. J.
Econ. 84, 488–500 (1970)
9. Y. Ben-Haim, Info-Gap Decision Theory: Decisions Under Severe Uncertainty (Academic
Press, San Diego, 2006)
10. J.R. Benjamin, C.A. Cornell, Probability, Statistics, and Decision for Civil Engineers
(McGraw-Hill, New York, 1970)
11. H.P. Binswanger, Attitudes toward risk: experimental measurement in rural India. Am. J.
Agric. Econ. 62, 395–407 (1980)
12. G.E. Box, N.R. Draper, Empirical Model-Building and Response Surfaces. Wiley Series in
Probability and Statistics (1987)
13. R. Coase, The nature of the firm. Economica 4, 386–405 (1937)
14. R. Fernandez, D. Rodrik, Resistance to reform: status quo bias in the presence of individual-
specific uncertainty. Am. Econ. Rev. 81, 1146–1155 (1991)
15. M. Finucane, A. Alhakami, P. Slovic, S.M. Johnson, The affect heuristic in judgments of risks
and benefits. J. Behav. Decis. Mak. 13, 1–17 (2000)
16. J.K. Ford, N. Schmitt, S.L. Schechtman, B.M. Hults, M.L. Doherty, Process tracing methods:
contributions, problems, and neglected research questions. Org. Behav. Hum. Decis. 43, 75–
117 (1989)
17. D.C. Gause, G.M. Weinberg, Exploring Requirements: Quality Before Design (Dorset House,
New York, 1989)
18. G. Gigerenzer, From tools to theories: a heuristic of discovery in cognitive psychology. Psy-
chol. Rev. 98, 254–267 (1991)
19. G. Gigerenzer, On narrow norms and vague heuristics: a reply to Kahneman and Tversky.
Psychol. Rev. 103, 592–596 (1996)
20. G. Gigerenzer, Fast and frugal heuristics: the tools of bounded rationality, in Blackwell Hand-
book of Judgment and Decision Making, ed. by D. Koehler, N. Harvey (Blackwell, Malden,
2006), pp. 62–88
21. R. Howard, J. Matheson, Influence diagrams, in The Principles and Applications of Decision
Analysis, Vol. II. (Strategic Decisions Group, Menlo Park, 1981). Published again in: Decis.
Anal. 2, 127–143 (2005)
22. D. Kahneman, A. Tversky, On the psychology of prediction. Psychol. Rev. 80, 237–251
(1973)
23. D. Kahneman, A. Tversky, Prospect theory: an analysis of decision under risk. Econometrica
47, 263–292 (1979)
24. D. Kahneman, J.L. Knetsch, R.H. Thaler, Anomalies: the endowment effect, loss aversion,
and status quo bias. J. Econ. Perspect. 5, 193–206 (1991)
25. G.A. Kiker et al., Application of multicriteria decision analysis in environmental decision
making. Integr. Environ. Assess. Manag. 1, 95–108 (2005)
26. G. Kirchgässner, Homo Oeconomicus: The Economic Model of Behaviour and Its Applica-
tions in Economics and Other Social Sciences (Springer, Berlin, 2008)
27. E. Kurz-Milcke, G. Gigerenzer, Heuristic decision making. Mark. J. Res. Manag. 3, 48–56
(2007)
28. A. Lentz, Acceptability of civil engineering decisions involving human consequences. PhD
thesis, TU München, Germany (2007)
29. I.P. Levin, D.P. Chapman, Risk taking, frame of reference, and characterization of victim
groups in AIDS treatment decisions. J. Exp. Soc. Psychol. 26, 421–434 (1990)
30. C.F. Menezes, D.L. Hanson, On the theory of risk aversion. Int. Econ. Rev. 11, 481–487
(1970)
31. D.M. Messick, M.H. Bazerman, Ethical leadership and the psychology of decision making.
MIT Sloan Manag. Rev. 37, 9–22 (1996)
32. J.W. Pratt, Risk aversion in the small and in the large. Econometrica 32, 122–136 (1964)
33. M. Rabin, Risk aversion and expected-utility theory: a calibration theorem. Econometrica 68,
1281–1292 (2000)
34. H. Raiffa, Decision analysis: a personal account of how it got started and evolved. Oper. Res.
50, 179–185 (2002)
35. H. Raiffa, R. Schlaifer, Applied Statistical Decision Theory (Cambridge University Press,
Cambridge, 1961)
36. J. Roosen, Cost-benefit analysis, in Risk – A Multidisciplinary Introduction, ed. by C. Klüp-
pelberg, D. Straub, I. Welpe (2014)
37. W. Samuelson, R. Zeckhauser, Status quo bias in decision making. J. Risk Uncertain. 1, 7–59
(1988)
38. H. Simon (ed.), Models of Man: Social and Rational (Wiley, New York, 1957)
39. P. Slovic, E. Peters, Risk perception and affect. Curr. Dir. Psychol. Sci. 15, 322–325 (2006)
40. P. Slovic, M. Finucane, E. Peters, D.G. MacGregor, Risk as analysis and risk as feelings: some
thoughts about affect, reason, risk, and rationality. Risk Anal. 24, 1–12 (2004)
41. A.A. Stanton, I.M. Welpe, Risk and ambiguity: entrepreneurial research from the perspective
of economics, in Neuroeconomics and the Firm, ed. by A.A. Stanton, M. Day, I.M. Welpe
(Edward Elgar, Cheltenham, 2010), pp. 29–49
42. D. Straub, Engineering risk assessment, in Risk – A Multidisciplinary Introduction, ed. by
C. Klüppelberg, D. Straub, I. Welpe (2014)
43. D. Straub, Value of information analysis with structural reliability methods. Struct. Saf.
(2014). doi:10.1016/j.strusafe.2013.08.006
44. T.O. Tengs, Dying too soon: how cost-effectiveness analysis can save lives. NCPA Policy
Report #204, National Center for Policy Analysis, Dallas (1997)
45. R. Thaler, Toward a positive theory of consumer choice. J. Econ. Behav. Organ. 1, 39–60
(1980)
46. The Royal Swedish Academy of Sciences. Press release, advanced information on the prize
in economic sciences 2002, 17 December 2002 (retrieved 28 August 2011). http://www.
nobelprize.org/nobel_prizes/economics/laureates/2002/ecoadv02.pdf
47. A. Tversky, D. Kahneman, Judgment under uncertainty: heuristics and biases. Science 185,
1124–1131 (1974)
48. W.K. Viscusi, J.E. Aldy, The value of a statistical life: a critical review of market estimates
throughout the world. J. Risk Uncertain. 27, 5–76 (2003)
49. J. von Neumann, O. Morgenstern, Theory of Games and Economic Behavior (Princeton
University Press, Princeton, 1944)
50. I.M. Welpe, M. Spörrle, D. Grichnik, T. Michl, D. Audretsch, Emotions and opportunities:
the interplay of opportunity evaluation, fear, joy, and anger as antecedent of entrepreneurial
exploitation. Entrep. Theory Pract. 36, 1–28 (2012)