This paper investigates human decision-making under uncertainty. The authors found that people make predictable errors of judgment when dealing with complex decisions because they rely on mental shortcuts called "heuristics", which can produce systematic "biases". Three key heuristics are identified: representativeness, which judges probability by similarity; availability, which estimates frequency by how easily instances can be recalled; and anchoring, which adjusts from an initial value. While heuristics aid swift thinking, they can also lead to biases. Understanding these cognitive tendencies could improve decision-making under uncertainty.
Judgment Under Uncertainty
Heuristics and Biases by Amos Tversky and Daniel Kahneman
Amos Tversky and Daniel Kahneman's paper Judgment Under Uncertainty: Heuristics and Biases is a landmark in the history of psychology. It has had a particular impact on economics, where Tversky and Kahneman's work helped shape the entirely new subdiscipline of "behavioral economics". Interest in analyzing human decision behavior has increased considerably in recent years. Within fields such as technology assessment, the limits of people's ability to decide can themselves become sources of uncertainty, sometimes with serious consequences. Decisions under uncertainty are traditionally defined by incomplete information or knowledge about a situation: the possible alternatives, the probabilities of their occurrence, or their outcomes are not known to the decision-maker. The sources of uncertainty are not homogeneous; uncertainty may be attributed internally or externally.

The paper investigates human decision-making, specifically what human brains tend to do when forced to deal with uncertainty or complexity. Based on experiments carried out with volunteers, Tversky and Kahneman discovered that humans make predictable errors of judgment when forced to weigh evidence or make challenging decisions. These errors stem from "heuristics" and the "biases" they produce: mental shortcuts and assumptions that allow us to make swift, automatic decisions, often usefully and correctly, but occasionally to our detriment. The paper describes three such heuristics: (i) an assessment of representativeness or similarity, which is usually performed when people are asked to judge the probability that an object or event A belongs to a class or process B; (ii) an assessment of the availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) an adjustment from a starting point, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.

Representativeness

Many of the questions about probability with which people are concerned belong to one of the following types: What is the probability that object A belongs to class B? What is the probability that event A originates from process B? What is the probability that process A will generate event B? In answering such questions, people generally rely on the representativeness heuristic, in which probabilities are evaluated by the degree to which A is representative of B, that is, by the degree of similarity between them. For example, when A is highly representative of B, the probability that A originates from B is judged to be high. On the other hand, if A is not similar to B, the probability that A originates from B is judged to be low.

For an illustration of judgment by representativeness, consider an individual who has been described by a former friend as follows: Shreeya is very shy and withdrawn, invariably helpful, but with little interest in people or in the world of reality. A meek and tidy soul, she has a need for order and structure, and a passion for detail. How do people assess the probability that Shreeya is engaged in a particular occupation from a list of possibilities (for example, farmer, salesman, airline pilot, librarian, or physician)? How do people order these occupations from most to least likely?
Under the representativeness heuristic, the probability that Shreeya is a librarian, for example, is assessed by the degree to which she is representative of, or similar to, the stereotype of a librarian. Indeed, research with problems of this type has shown that people order the occupations by probability and by similarity in exactly the same way. This approach to the judgment of probability leads to serious errors, because similarity, or representativeness, is not influenced by several factors that should affect judgments of probability, such as the prior probability, or base rate, of each occupation in the population (see the sketch below).

Summing up, the paper describes three heuristics that are employed in making judgments under uncertainty: (i) representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; (ii) availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and (iii) adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.
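
To make the base-rate point concrete, here is a minimal sketch in Python of how a probability judgment should combine the fit of a description (similarity) with base rates via Bayes' rule. The occupations, base rates, and fit values are purely hypothetical assumptions for illustration, not figures from the paper.

# A minimal sketch (hypothetical numbers) of why judgments of probability
# should weigh base rates as well as similarity. Suppose 2% of the relevant
# population are librarians and 20% are salesmen, and that a "shy, tidy,
# detail-oriented" description fits 60% of librarians but only 5% of salesmen.

base_rate = {"librarian": 0.02, "salesman": 0.20}   # P(occupation)
fit = {"librarian": 0.60, "salesman": 0.05}         # P(description | occupation)

# Bayes' rule: P(occupation | description) is proportional to
# P(description | occupation) * P(occupation).
joint = {occ: fit[occ] * base_rate[occ] for occ in base_rate}
total = sum(joint.values())
posterior = {occ: round(p / total, 3) for occ, p in joint.items()}

print(posterior)   # {'librarian': 0.545, 'salesman': 0.455}
# Although the description is twelve times more typical of a librarian,
# the much larger number of salesmen makes the two occupations almost
# equally probable; judging by similarity alone ignores this base-rate effect.

This insensitivity to prior probability is exactly the kind of factor the representativeness heuristic overlooks.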