being terrified of very unlikely terrible events is a known human failure mode
one wonders how something like that might have evolved, doesn’t one? What happened to all the humans who carried the mutation that made them want to find out whether the sabre-toothed tiger was friendly?
I don’t see how very unlikely events whose probability people actually knew would have been part of the evolutionary environment at all.
In fact, I would posit that the bias is most likely due to having a very high floor for probability. In the evolutionary environment, things with a probability you knew to be below 1% would be unlikely ever to be brought to your attention. So it is to be expected that we have no good method for intuitively handling probabilities between 1% and zero.
In fact, I don’t think I have an innate handle on probability to any finer grain than ~10% increments. Anything more than that seems to require mathematical thought.
Probably less than 1% of cave-men died by actively seeking out the sabre-toothed tiger to see if it was friendly. But I digress.
But probably far more than 1% of the cave-men who chose to seek out a sabre-toothed tiger to see if it was friendly died from doing so.
The relevant question on an issue of personal safety isn’t “What % of the population die due to trying this?”
The relevant question is: “What % of the people who try this will die?”
By the first measure, rollerskating downhill, while on fire, after having taken arsenic would seem safe (as I suspect no-one has ever done precisely that).
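To make the difference between the two questions concrete, here is a toy calculation in Python, with all of the numbers invented purely for illustration:

```python
# Invented numbers: almost nobody attempts the stunt, but most who do die.
population = 1_000_000
attempted = 10
died = 8

joint_rate = died / population        # "What % of the population die due to trying this?"
conditional_rate = died / attempted   # "What % of the people who try this will die?"

print(f"Share of the whole population killed: {joint_rate:.4%}")        # 0.0008% -- looks 'safe'
print(f"Share of attempters killed:           {conditional_rate:.0%}")  # 80% -- clearly lethal
```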
one wonders how something like that might have evolved, doesn’t one?
No, really, one doesn’t wonder. It’s pretty obvious. But if we’ve gotten to the point where “this bias paid off in the evolutionary environment!” is actually used as an argument, then we are off the rails of refining human rationality.
What’s wrong with using “this bias paid off in the evolutionary environment!” as an argument? I think people who paid more attention to this might make fewer mistakes, especially in domains where there isn’t a systematic, exploitable difference between EEA and now.
The evolutionary environment contained entities capable of dishing out severe punishments, uncertainty, etc.
If anything, I think that the heuristic that an idea “obviously” can’t be dangerous is the problem, not the heuristic that one should take care around possibilities of strong penalties.
It is a fine argument for explaining the widespread occurrence of fear. However, today humans are in an environment where their primitive paranoia is frequently triggered by inappropriate stimuli.
Dan Gardner goes into this in some detail in his book, Risk: The Science and Politics of Fear.
Video of Dan discussing the topic: Author Daniel Gardner says Americans are the healthiest and safest humans in the world, but are irrationally plagued by fear. He talks with Maggie Rodriguez about his book ‘The Science of Fear.’
He says “we” are the healthiest and safest humans ever to live, but I’m very skeptical that this applies specifically to Americans rather than to present-day first-world citizens in general.
Yes, in the contemporary USA we are, in fact, safer than in the EEA.
But still, there are some places where danger is real, like the Bronx, Scientology, organized crime, or walking across a freeway. So don’t go rubbishing the heuristic of being frightened of potentially real danger.
I think it would only be legitimate to criticize fear itself on “outside view” grounds if we lived in a world with very little actual danger, which is not at all the case.
But still, there are some places where danger is real, like the Bronx, Scientology, organized crime, or walking across a freeway.
So, this may be a good way to approach the issue: the loss to an individual human is, roughly speaking, finite. Thus, the correct approach to fear is to gauge risks by their chance of loss, and then discount a risk if its loss is not fatal.
So we should be much less worried by a 1e-6 risk than by a 1e-4 risk, and by a 1e-4 risk than by a 1e-2 risk. If you are more scared of a 1e-6 risk than of a 1e-2 risk, you’re reasoning fallaciously.
Now, one might respond: “But wait! This 1e-6 risk is 1e5 times worse than the 1e-2 risk!” But that seems to fall into the traps of visibility bias and privileging the hypothesis. If you’re considering a 1e-6 risk, have you worked out not just all the higher-order risks, but also all of the lower-order risks that might have higher-order impact? So when you take an idea like the one in question, to which I would assign a risk of 1e-20 for discussion’s sake, and consider it without also bringing essentially every other possible risk into your calculus, you’re not doing it rigorously. And, of course, humans can’t do that computation.
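For concreteness, here is a minimal sketch of that expected-loss comparison in Python, with every probability and loss magnitude invented for the example:

```python
# Invented (probability, relative loss) pairs, for illustration only.
risks = {
    "common (1e-2), ordinary loss":  (1e-2, 1.0),
    "rare (1e-6), ordinary loss":    (1e-6, 1.0),
    "rare (1e-6), 1e5x larger loss": (1e-6, 1e5),
}

for name, (p, loss) in risks.items():
    # Expected loss = probability of the bad event * magnitude of the loss.
    print(f"{name}: expected loss = {p * loss:.0e}")

# common (1e-2), ordinary loss:  expected loss = 1e-02
# rare (1e-6), ordinary loss:    expected loss = 1e-06
# rare (1e-6), 1e5x larger loss: expected loss = 1e-01
#
# The third line is the "but wait!" objection: on paper it dominates, but
# only if you have also enumerated every other rare risk that might matter,
# which, as noted above, no human can actually do.
```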
Now, the kicker here is that we’re talking about fear. I might fear the loss of every person I know just as strongly as I fear the loss of every person that exists, but be willing to do more to prevent the loss of everyone that exists (because that loss is actually larger). Fear has psychological ramifications, not decision-theoretic ones. If this idea has a 1e-20 chance of coming to pass, you can ignore it on the level of fear, and if you aren’t doing so, then I’m willing to consider that evidence that you need help coping with fear.
I have a healthy respect for the adaptive aspects of fear. However, we do need an explanation for the scale and prevalence of irrational paranoia.
The picture of an ancestral water hole surrounded by predators helps us to understand the origins of the phenomenon. The ancestral environment was a dangerous and nasty place where people led short, brutish lives. There, living in constant fear made sense.
He always held that panic was the best means of survival. Back in the old days, his theory went, people faced with hungry sabre-toothed tigers could be divided into those who panicked and those who stood there saying, “What a magnificent brute!” or “Here pussy”.
Someone’s been reading Terry Pratchett.