non-consensually killing vast numbers of people and their children for some chance of improving one's own longevity.
I think this misrepresents the scenario since AGI presumably won’t just improve my own longevity: it will presumably improve most people’s longevity (assuming it does that at all), in addition to all the other benefits that AGI would provide the world. Also, both potential decisions are “unilateral”: if some group forcibly stops AGI development, they’re causing everyone else to non-consensually die from old age, by assumption.
I understand you have the intuition that there’s an important asymmetry here. However, even if that’s true, I think it’s important to strive to be accurate when describing the moral choice here.
I agree that potentially the benefits can go to everyone. The point is that as the person pursuing AGI you are making the choice for everyone else.
The asymmetry is that if you do something that creates risk for everyone else, I believe that does single you out as an aggressor, whereas enforcing norms that prevent such risky behavior seems justified. The fact that by default people are mortal is tragic, but doesn't have much bearing here. (You'd still be free to pursue life-extension technology in other ways, perhaps including limited AI tools.)
Ideally, of course, there'd be some sort of democratic process here that lets people in aggregate make informed (!) choices. In the real world, it's unclear what a good solution would be. What we have right now is the big labs creating facts that society has trouble catching up with, which I think many people are reasonably uncomfortable with.