The question IIRC wasn’t about the most worrisome, but about the most likely—it is not inconsistent to assign to uFAI (say) 1000 times the disutility of nuclear war but only 0.5 times its probability.
(ETA: I’m assuming worrisomeness is defined as the product of probability and disutility, or a monotonic function thereof.)
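As a quick sanity check on that arithmetic, here is a minimal sketch in Python under the product definition above; the specific numbers are placeholders I made up for illustration, not estimates of anything:

```python
# Worrisomeness as probability * disutility (the definition assumed above).
# All numbers below are illustrative placeholders, not estimates.

def worrisomeness(probability: float, disutility: float) -> float:
    """Expected disutility of a risk."""
    return probability * disutility

p_nuclear, d_nuclear = 0.10, 1.0                      # baseline risk, arbitrary units
p_ufai,    d_ufai    = 0.5 * p_nuclear, 1000 * d_nuclear

print(worrisomeness(p_nuclear, d_nuclear))  # 0.1
print(worrisomeness(p_ufai, d_ufai))        # 50.0 -- 500x more worrisome, yet half as likely
```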
I think that worrisomeness should also factor in our ability to do anything about the problem.
If I’m selfish, then I don’t particularly need to worry about global catastrophic risks that will kill (almost) everyone—I’d just die and there’s nothing I can do about it. I’d worry more about risks that are survivable, since they might require some preparation.
If I’m altruistic, then I don’t particularly need to worry about risks that are inevitable, or where there is already a well-funded and sane mitigation effort going on (since I’d have very little individual ability to make a difference to the probability). I might worry more about risks that have a lower expected disutility but where the mitigation effort is drastically underfunded (a toy comparison is sketched below).
(This is assuming real-world decision theory degenerates into something like CDT; if instead we adopt a more sophisticated decision theory and suppose there are enough other people in our reference class, then “selfish” people would behave more like the “altruistic” people in the above paragraph.)
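To put rough numbers on the neglectedness point above, here is a toy comparison in Python; the probabilities, disutilities, and the marginal_worrisomeness helper are all illustrative assumptions, not anyone’s actual estimates:

```python
# Sketch of the neglectedness point: if what matters to an altruist is how much
# *you* can shift the probability, a smaller risk with an underfunded mitigation
# effort can dominate a bigger but well-covered one. All numbers are made up.

def marginal_worrisomeness(delta_p_you_can_cause: float, disutility: float) -> float:
    """Expected disutility averted by your marginal contribution."""
    return delta_p_you_can_cause * disutility

# Risk A: huge disutility, but mitigation is already well funded, so your
# marginal effect on its probability is tiny.
risk_a = marginal_worrisomeness(delta_p_you_can_cause=1e-9, disutility=1000.0)

# Risk B: lower disutility, but drastically underfunded, so one more person
# shifts the probability noticeably more.
risk_b = marginal_worrisomeness(delta_p_you_can_cause=1e-6, disutility=10.0)

print(risk_a)  # 1e-06
print(risk_b)  # 1e-05 -- ten times the expected disutility averted per unit of your effort
```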
Well, if you’re selfish you’d assign more or less the same utility to all states of the world in which you’re dead (unless you believe in an afterlife), and in any event you’d assign a higher probability to a particular risk given that “the mitigation effort is drastically underfunded” than given that “there is already well-funded and sane mitigation effort going on”. But you do have a point.