The “everything might just turn out to be okay” proposition is under-specified. It effectively asks for the likelihood that every catastrophe, known and unknown, fails to occur. To evaluate it, you need to specify the dimensions of “okay” and their bounds, which turns the question into a series of smaller propositions; working through those propositions is approximately what the field of X-risk does.
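As a toy illustration of that decomposition, consider the sketch below. The risk names and probabilities are made up, and the multiplication assumes the risks are independent, which real catastrophes need not be; the point is only that “everything okay” is a conjunction over many smaller propositions.

```python
# Toy decomposition of "everything turns out okay" into smaller propositions.
# All names and numbers here are illustrative placeholders, and independence
# of the risks is assumed purely to keep the arithmetic simple.
catastrophe_risks = {
    "risk_a": 0.01,
    "risk_b": 0.005,
    "risk_c": 0.02,
}

p_okay = 1.0
for name, p in catastrophe_risks.items():
    # P(all catastrophes avoided) = product of P(each one avoided), under independence
    p_okay *= (1.0 - p)

print(f"P(everything okay) is approximately {p_okay:.4f}")
```

Even with each individual risk looking small, the conjunction can end up noticeably below 1, which is why the overall question can't be answered without pinning down the list.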
Regarding hedging for the worst: suppose we flip that framing around. Isn’t it weird that people would be okay with a 1% chance of dying horribly? The evidence suggests that they are not: for frightening risks, people start getting anxious somewhere around the 0.01% range.