I think it’s important to distinguish between:
1) Rationality as truth-seeking.
2) Rationality as utility maximization.
For some of the examples these will go together. For others, moving closer to the truth may be a utility loss—e.g. for political zealots whose friends and colleagues tend to be political zealots.
It’d be interesting to see a comparison between such cases. At the least, you’d want to vary the following:
Having a very high prior on X’s being true.
Having a strong desire to believe X is true.
Having a strong emotional response to X-style situations.
The expected loss/gain in incorrectly believing X to be true/false.
Cultists and zealots will often have a strong incentive to believe some X even if it’s false, so it’s not clear the high prior is doing most/much of the work there.
With trauma-based situations, it also seems particularly important to consider utilities: there’s more to lose in incorrectly thinking things are safe than in incorrectly thinking they’re dangerous.
When you start out believing something’s almost certainly very dangerous, you may be right. For a human, the utility-maximising move is probably to require more than the ‘correct’ amount of evidence to shift your belief (given that you’re impulsive, foolish, impatient… and so can’t necessarily be trusted to act in your own interests with an accurate assessment).
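To make the asymmetry concrete, here’s a toy calculation with invented payoffs. When wrongly treating a dangerous situation as safe costs a hundred times more than wasted caution, acting cautiously stays the better bet until you’re roughly 99% confident things are safe, so demanding ‘extra’ evidence before changing behaviour can be the utility-maximising policy even when the belief itself has already shifted somewhat.

```python
# Toy sketch (all numbers invented): with asymmetric losses, the threshold at
# which "act as if safe" beats "act cautiously" sits at a very low P(danger).

LOSS_IF_HURT = 100.0         # treated it as safe, it was actually dangerous
LOSS_IF_OVERCAUTIOUS = 1.0   # treated it as dangerous, it was actually safe

def expected_loss_act_safe(p_danger: float) -> float:
    return p_danger * LOSS_IF_HURT

def expected_loss_act_cautious(p_danger: float) -> float:
    return (1 - p_danger) * LOSS_IF_OVERCAUTIOUS

# Indifference point: p * 100 = (1 - p) * 1, i.e. p is roughly 0.0099.
for p in (0.5, 0.1, 0.02, 0.005):
    better = ("act cautiously"
              if expected_loss_act_cautious(p) < expected_loss_act_safe(p)
              else "act as if safe")
    print(f"P(danger)={p:.3f}: {better}")
```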
It’s also worth noting that habituation can be irrational. If you’re repeatedly in a situation where there’s good reason to expect a 0.1% risk of death, but nothing bad happens the first 200 times, then you’ll likely habituate and come to under-rate the risk—unless your awareness of the risk makes the experience of the situation appropriately horrifying each time.
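A quick back-of-the-envelope check (assuming independent exposures, using the numbers above): 200 uneventful exposures are roughly what you’d expect even if the 0.1% risk is entirely real, so they carry almost no evidential weight against it.

```python
# Rough check (assuming independent exposures): surviving 200 exposures is the
# expected outcome even at a genuine 0.1% per-exposure risk of death, so those
# 200 "nothing happened" experiences are only weak evidence the risk is lower.

p_death = 0.001
n_trials = 200

p_all_safe_if_risky = (1 - p_death) ** n_trials  # about 0.82
p_all_safe_if_harmless = 1.0                     # if the risk were actually ~0

# Likelihood ratio (harmless : risky) for "200 safe trials" is about 1.22,
# i.e. barely any Bayesian update -- yet habituation will feel like much more.
print(f"P(200 safe | 0.1% risk) = {p_all_safe_if_risky:.3f}")
print(f"Likelihood ratio = {p_all_safe_if_harmless / p_all_safe_if_risky:.2f}")
```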
On polar bears vs coyotes:
I don’t think it’s reasonable to label the “…I saw a polar bear…” sensation as “evidence for bear”. It’s weak evidence for bear. It’s stronger evidence for the beginning of a joke. For [polar bear], the [earnest report]:[joke] odds ratio is much lower than for [coyote].
I don’t think you need to bring in any irrational bias to get this result. There’s little shift in belief since it’s very weak evidence.
If your friend never makes jokes, then the point may be reasonable. (In particular, for your friend to mistakenly but earnestly believe she saw a polar bear, it’s reasonable to assume that she has already compensated for polar-bear unlikeliness; the same doesn’t apply if she’s telling a joke.)
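Here’s a crude Bayesian sketch of that point, with all the probabilities invented and the ‘mistaken earnest report’ branch dropped for simplicity. The same utterance is strong evidence for [coyote], weak evidence for [polar bear] from a friend who sometimes jokes, and strong evidence again from a friend who never does.

```python
# Crude sketch (all probabilities invented; "mistaken earnest report" omitted):
# how much the utterance "I saw a <animal>" should move you depends on the
# [earnest report]:[joke] likelihood ratio, not just on the animal's base rate.

def p_animal_given_report(p_animal_nearby: float, p_joke: float) -> float:
    p_earnest_report = p_animal_nearby * 0.9  # she'd probably mention a real sighting
    return p_earnest_report / (p_earnest_report + p_joke)

# Coyote: locally plausible, rarely joke material.
print("coyote:", p_animal_given_report(p_animal_nearby=0.05, p_joke=0.001))
# Polar bear, friend who jokes: sightings essentially impossible here, jokes aren't.
print("bear, jokey friend:", p_animal_given_report(p_animal_nearby=1e-7, p_joke=0.001))
# Polar bear, friend who never jokes: the earnest explanation is all that's left.
print("bear, never jokes:", p_animal_given_report(p_animal_nearby=1e-7, p_joke=1e-9))
```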