Rational to distrust your own rationality?

A number of experiments over the years have shown that expected utility theory (EUT) fails to predict the actual observed behavior of people in decision situations; take, for example, the Allais paradox. Whether an average human being can be considered a rational agent has long been under debate, and critics of EUT point to the inconsistency between theory and observation and conclude that the theory is flawed. I will begin with the Allais paradox, but the aim of this discussion is to reach something much broader: asking whether distrust in one's own ability to reason should itself be included in a chain of reasoning.
From Wikipedia:
The Allais paradox arises when comparing participants’ choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows:
Experiment 1
    Gamble 1A: $1 million with 100% chance
    Gamble 1B: $1 million with 89% chance; nothing with 1% chance; $5 million with 10% chance

Experiment 2
    Gamble 2A: nothing with 89% chance; $1 million with 11% chance
    Gamble 2B: nothing with 90% chance; $5 million with 10% chance
Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone.
However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B.
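To see why, a quick check helps (my own illustration, not part of the quoted article). Normalize u($0) = 0 and u($1M) = 1 and leave u5 = u($5M) free; then EU(1A) − EU(1B) and EU(2A) − EU(2B) reduce to the very same expression, 0.11 − 0.10·u5, so no utility function can rank 1A above 1B while ranking 2B above 2A:

```python
# Numerical check that EUT forbids the 1A-and-2B pattern (illustrative
# sketch). Utilities are normalized: u($0) = 0, u($1M) = 1, and
# u5 = u($5M) is left as a free parameter.

def eu_1a(u5): return 1.00 * 1.0
def eu_1b(u5): return 0.89 * 1.0 + 0.01 * 0.0 + 0.10 * u5
def eu_2a(u5): return 0.89 * 0.0 + 0.11 * 1.0
def eu_2b(u5): return 0.90 * 0.0 + 0.10 * u5

for u5 in (1.0, 1.05, 1.1, 1.2, 2.0, 5.0):
    print(f"u5={u5:4.2f}  prefer 1A: {eu_1a(u5) > eu_1b(u5)}  "
          f"prefer 2A: {eu_2a(u5) > eu_2b(u5)}")

# Both comparisons reduce to the sign of 0.11 - 0.10 * u5, so the two
# answers always agree: 1A-and-2A or 1B-and-2B, never 1A-and-2B.
```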
I would say that there is a difference between the two experiments (call them E1 and E2) that EUT does not take into account: in E1, understanding 1B is a more complex computational task than understanding 1A, while in E2 the gambles 2A and 2B are roughly equal in complexity. There could therefore be a class of semi-rational people who have difficulty working through the details of 1B and so assign some level of uncertainty to their own "calculations". 1A involves no calculation at all; they are certain to receive $1 million. That uncertainty makes it rational for them to choose the alternative they are more comfortable with. In E2, by contrast, the comparison is simpler, almost a no-brainer.
Now take "rational agent" to mean any information-processing entity capable of making choices, human, AI, or otherwise, and consider more complex cases. It is reasonable to assume that this uncertainty grows with the complexity of the computational task. At some point, then, it should become rational to make the "irrational" set of choices once the agent's uncertainty about its own ability to make calculated choices is weighed in.
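As a sketch of what that could mean formally, here is a toy model of my own (the penalty form, the complexity proxy, and the weight LAMBDA are all assumptions, not an established theory): each gamble's expected utility is reduced by a penalty that grows with how hard the gamble is to evaluate, proxied here by its number of distinct outcomes. With a suitable weight, the rule reproduces exactly the "paradoxical" 1A-and-2B pattern:

```python
# Toy "introspective" decision rule (illustrative sketch, not a standard
# model): expected utility minus a penalty for computational complexity.

LAMBDA = 0.2  # assumed strength of self-distrust (hypothetical parameter)

def adjusted_value(gamble, u, lam=LAMBDA):
    """Expected utility of a gamble, discounted for how hard it is to evaluate.

    gamble: list of (probability, payoff) pairs
    u:      utility function over payoffs
    """
    eu = sum(p * u(x) for p, x in gamble)
    complexity = len(gamble) - 1  # a sure thing takes no calculation
    return eu - lam * complexity

u = lambda x: x / 1_000_000  # linear utility in millions, for simplicity

g1a = [(1.00, 1_000_000)]
g1b = [(0.89, 1_000_000), (0.01, 0), (0.10, 5_000_000)]
g2a = [(0.89, 0), (0.11, 1_000_000)]
g2b = [(0.90, 0), (0.10, 5_000_000)]

print("E1 choice:", "1A" if adjusted_value(g1a, u) > adjusted_value(g1b, u) else "1B")
print("E2 choice:", "2A" if adjusted_value(g2a, u) > adjusted_value(g2b, u) else "2B")
# Prints 1A and 2B: the penalty hits three-outcome 1B harder than sure-thing
# 1A, but hits 2A and 2B equally, so it cancels out in E2.
```

The point is not this particular formula but the qualitative effect: any penalty that scales with evaluation difficulty favors the sure thing in E1 while leaving the E2 comparison untouched.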
Decision models usually take into account external sources of uncertainty and risk when dealing with rational choice: expected utility, risk aversion, and so on. My question is: shouldn't a rational agent also take into account an internal (introspective) assessment of its own reasoning when making choices? (Humans may well do so, and that would explain the Allais paradox as an effect of rational behavior.)
Basically: could decision models that include this kind of introspective analysis do better at (1) explaining human behavior and (2) building AIs?