I think that worrisomeness should also factor in our ability to do anything about the problem.
If I’m selfish, then I don’t particularly need to worry about global catastrophic risks that will kill (almost) everyone—I’d just die and there’s nothing I can do about it. I’d worry more about risks that are survivable, since they might require some preparation.
If I’m altruistic then I don’t particularly need to worry about risks that are inevitable, or where there is already well-funded and sane mitigation effort going on (since I’d have very little individual ability to make a difference to the probability). I might worry more about risks that have a lower expected disutility but where the mitigation effort is drastically underfunded.
(This assumes that real-world decision theory degenerates into something like CDT; if we instead adopt a more sophisticated decision theory and suppose there are enough other people in our reference class, then “selfish” people would behave more like the “altruistic” people in the paragraph above.)
Well, if you’re selfish you’d assign more or less the same utility to all states of the world in which you’re dead (unless you believe in an afterlife). And in any event, you’d assign a higher probability to a particular risk given that “the mitigation effort is drastically underfunded” than given that “there is already well-funded and sane mitigation effort going on.” But you do have a point.
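To make the prioritization point in the first comment concrete, here is a minimal numerical sketch (the figures and the marginal_value helper are purely hypothetical, chosen only for illustration): what matters to the altruist is the expected disutility averted per unit of marginal effort, not the raw expected disutility of the risk itself.

```python
# Illustrative only: hypothetical numbers showing why neglectedness can
# outweigh raw expected disutility when deciding where to direct effort.

def marginal_value(disutility, prob_reduction_per_effort):
    """Expected disutility averted by one unit of my effort."""
    return disutility * prob_reduction_per_effort

# Risk A: enormous disutility, but mitigation is already well funded and
# sane, so one more person's effort barely moves its probability.
risk_a = marginal_value(disutility=1e9, prob_reduction_per_effort=1e-12)

# Risk B: smaller disutility, but drastically underfunded, so the same
# effort shifts its probability far more.
risk_b = marginal_value(disutility=1e7, prob_reduction_per_effort=1e-8)

print(risk_a)  # 0.001
print(risk_b)  # 0.1  -> effort on the neglected risk averts more expected harm
```

On these made-up figures, a unit of effort spent on the well-funded, higher-stakes risk averts far less expected disutility than the same effort spent on the neglected, lower-stakes one, which is the sense in which the latter is more worth worrying about.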