I think there’s an important distinction between x-risks and most other things we consider to be tragedies of the commons: the reward for “cooperating” against “defectors” in an x-risk scenario (putting in disproportionate effort/resources to solve the problem) is still massively positive, conditional on the effort succeeding (and in many calculations, prior to that conditional). In most central examples of tragedies of the commons, the payoff for being a “good actor” surrounded by bad actors is net-negative, even assuming the stewardship is successful.
The common thread is that there might be a free-rider problem in both cases, of course.
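A toy payoff comparison might make the asymmetry concrete; every number below is a made-up illustration, not an estimate:

```python
# Toy payoff comparison; all numbers are illustrative assumptions.

# Classic commons: the lone good steward pays the full cost of restraint but,
# even if the stewardship succeeds, captures only a sliver of the shared benefit.
steward_cost = 10.0
steward_share_of_benefit = 1.0        # the rest of the benefit goes to the defectors
commons_payoff_given_success = steward_share_of_benefit - steward_cost   # -9.0

# X-risk "cooperation": the effort also costs you, but conditional on it
# succeeding you keep your stake in an astronomically valuable future.
xrisk_effort_cost = 10.0
personal_value_of_surviving_future = 1e6
xrisk_payoff_given_success = personal_value_of_surviving_future - xrisk_effort_cost

print(commons_payoff_given_success)   # negative even assuming success
print(xrisk_payoff_given_success)     # massively positive, conditional on success
```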
This is making the somewhat dubious assumption that X risks are not so neglected that even a “selfish” individual would find it worthwhile to work on reducing them. Of course, in the not-too-unreasonable scenario where the cosmic commons is divided up evenly and you use your portion to make a vast number of duplicates of yourself, the personal payoff would be vast, assuming your utility is linear in copies of yourself. Or you might hope to live for a ridiculously long time in a post-singularity world.
The effect that a single person can have on X risks is small, but for a selfish agent with no time discounting, working on them would still be a better option than hedonism now. Although a third alternative, sitting in a padded room being very, very safe, could be even better.
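As a rough sketch of that comparison (every parameter here is an illustrative assumption, not an estimate), the selfish case only needs the future payoff to be large enough to swamp the tiny per-person effect:

```python
# Selfish, no-time-discounting comparison from the comment above.
# All parameters are illustrative assumptions, not estimates.

marginal_risk_reduction = 1e-9        # assumed effect of one person's work on P(catastrophe)
value_of_surviving_future = 1e12      # assumed personal utility of copies / a very long life
cost_of_effort = 1.0                  # the hedonism forgone, in the same utility units

ev_work_on_xrisk = marginal_risk_reduction * value_of_surviving_future - cost_of_effort
ev_hedonism_now = cost_of_effort

print(ev_work_on_xrisk)   # 999.0: the huge payoff swamps the tiny marginal effect
print(ev_hedonism_now)    # 1.0

# A "padded room" strategy would add a term for reducing ordinary personal risk,
# which (for a purely selfish agent) could push it ahead of both options.
```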
I acknowledge that there’s a distinction, but I fail to see how it’s important. When you (shut up and) multiply out the probabilities, the expected personal reward for putting in disproportionate resources is negative, and the personal-welfare-optimizing level of effort is lower than the social-welfare-optimizing level.
Why is it important that that negative expected reward is made up of a positive term plus a large negative term (X-risk defense), instead of a negative term plus a different negative term (overgrazing)?
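For concreteness, here is a minimal free-rider sketch of the calculation being gestured at (all parameters are made-up assumptions): because I capture only my own share of the risk reduction but bear the full cost of my effort, my personally optimal effort sits well below the socially optimal level.

```python
import numpy as np

# Minimal free-rider sketch; all parameters are illustrative assumptions.
n_beneficiaries = 1e9                  # people who share the benefit of survival
value_per_person = 1e3                 # each person's value of avoiding the catastrophe
cost_per_unit_effort = 1.0
risk_reduction_per_unit_effort = 1e-7  # marginal effect of one unit of effort on P(catastrophe)

efforts = np.linspace(0.0, 1e5, 1001)

# My expected payoff: only my own share of the risk reduction, but the full cost.
personal_ev = efforts * risk_reduction_per_unit_effort * value_per_person - efforts * cost_per_unit_effort

# Social expected payoff: the same effort benefits everyone.
social_ev = efforts * risk_reduction_per_unit_effort * value_per_person * n_beneficiaries - efforts * cost_per_unit_effort

print("personally optimal effort:", efforts[np.argmax(personal_ev)])   # 0.0
print("socially optimal effort:  ", efforts[np.argmax(social_ev)])     # 100000.0
```

With this linear model the optima sit at the corners; diminishing returns would move both to interior levels, but the personal optimum stays below the social one either way, and the sign decomposition of the terms never enters.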