Richard, thanks for your reply. Just for reference, I think this goes under argument 5, right?
It’s a powerful argument, but I think it’s not watertight. I would counter it as follows:
As stated above, I think the aim should be an ideally global treaty where no country is allowed to go beyond a certain point of research. Each country should then enforce the treaty on all research institutes and companies within its borders. You’re right that in this case, a criminal or terrorist group would have an edge. But seeing how hard it currently is for legal and indeed heavily funded groups to develop AGI, I’m not convinced that terrorist or criminal groups could easily do it. For reference, I read this paper by a lawyer this week on an actual way to implement such a treaty. Signing such a treaty would not affect countries without effective AGI research capabilities, so they would have no reason not to sign it, and would benefit from the increased existential safety. The countries least inclined to sign up would be the ones trying to develop AGI now. So effectively, I think a global treaty and a US/China deal would amount to roughly the same thing.
You could make the same argument for tax, (unprofitable) climate action, R&D, defense spending against a common enemy, and probably many other issues. Does that mean we have zero tax, climate action, R&D, or defense? No, because at some point countries realize it’s better not to be the relative winner than for everyone to lose. In many cases this is then formalized in treaties, with varying but nonzero success. I think that could work in this case as well. Your argument is indeed a problem in all of the fields I mention, so you have a point. But I think, fortunately, it’s not a decisive one.