Yes, in the contemporary USA we are, in fact, safer than we were in the EEA.
But still, there are places where the danger is real: the Bronx, Scientology, organized crime, walking across a freeway. So don't go rubbishing the heuristic of being frightened of potentially real danger.
I think it would only be legitimate to criticize fear itself on “outside view” grounds if we lived in a world with very little actual danger, which is not at all the case.
So, this may be a good way to approach the issue: the loss to an individual human is, roughly speaking, finite. Thus, the correct approach to fear is to gauge risks by their chance of loss, and then discount a risk if it isn't fatal.
So we should be much less worried by a 1e-6 risk than by a 1e-4 risk, and by a 1e-4 risk than by a 1e-2 risk. If you are more scared of a 1e-6 risk than of a 1e-2 risk, you're reasoning fallaciously.
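To make the heuristic concrete, here is a minimal sketch. The risks, the probabilities, and the 0.1 non-fatal discount are all hypothetical placeholders, not claims about actual danger levels:

```python
# A sketch of the heuristic above: rank risks by their probability of loss,
# discounting outcomes that aren't fatal. All numbers are hypothetical.

def fear_weight(p_loss, fatal, nonfatal_discount=0.1):
    """Weight a risk by its chance of loss, discounted if not fatal."""
    return p_loss * (1.0 if fatal else nonfatal_discount)

risks = [
    ("walking across a freeway", 1e-2, True),
    ("organized crime", 1e-4, True),
    ("exotic speculative scenario", 1e-6, False),
]

# Most fear-worthy first: the 1e-2 risk dominates, the 1e-6 one is last.
ranked = sorted(risks, key=lambda r: fear_weight(r[1], r[2]), reverse=True)
for name, p, fatal in ranked:
    print(f"{name}: weight {fear_weight(p, fatal):.1e}")
```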
Now, one might respond: "But wait! This 1e-6 risk is 1e5 times worse than the 1e-2 risk!" But that seems to fall into the traps of visibility bias and privileging the hypothesis. If you're considering a 1e-6 risk, have you worked out not just all the higher-probability risks, but also all of the lower-probability risks that might have larger impact? So when you take an idea like the one in question, which I would assign a risk of 1e-20 for discussion's sake, and consider it without also bringing essentially every other possible risk into your calculus, you're not doing it rigorously. And, of course, humans can't do that computation.
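The arithmetic behind that response is easy to check; the losses here are purely illustrative:

```python
# Expected-loss arithmetic for the objection above (illustrative numbers only):
# a 1e-6 risk whose loss is 1e5 times larger carries roughly 10x the expected
# loss of the 1e-2 risk -- which is exactly why the objection is tempting.

p_common, loss_common = 1e-2, 1.0   # the 1e-2 risk, baseline loss of 1
p_rare, loss_rare = 1e-6, 1e5       # the 1e-6 risk, loss 1e5 times larger

ev_common = p_common * loss_common  # expected loss ~ 0.01
ev_rare = p_rare * loss_rare        # expected loss ~ 0.1

print(ev_rare / ev_common)          # roughly 10
```

The arithmetic checks out in isolation; the point above is that it only becomes rigorous if you run the same comparison over essentially every possible risk at once, which humans can't do.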
Now, the kicker here is that we're talking about fear. I might fear the loss of every person I know just as strongly as I fear the loss of every person that exists, yet be willing to do more to prevent the loss of everyone that exists (because that loss is actually larger). Fear has psychological ramifications, not decision-theoretic ones. If this idea has a 1e-20 chance of coming to pass, you can ignore it on the level of fear, and if you aren't ignoring it, then I'm willing to consider that evidence that you need help coping with fear.
I have a healthy respect for the adaptive aspects of fear. However, we do need an explanation for the scale and prevalence of irrational paranoia.
The picture of an ancestral water hole surrounded by predators helps us to understand the origins of the phenomenon. The ancestral environment was a dangerous and nasty place where people led short, brutish lives. There, living in constant fear made sense.