“The lack of resource distribution has a twofold problem:
There is a need for agreement on the distribution of AI resources. However, a wider diffusion of AI could increase the risk of misuse leading to a possible reduction in diffusion.”
Your argument is that only certain states should develop AGI, and while that makes sense on one hand, you’re not accounting for how others will react to the non-diffusion of AI. I’m not arguing for the wider distribution of AI; rather, I’m pointing out how others will react to being disadvantaged. That reaction is itself a dumb idea, since it will just cause increased competition between states and less regulation at the international level, thereby increasing the risks from AGI and AI in general.
The AGI Ruin argument isn’t about misuse. The problem isn’t misuse, it’s uncontrollability. Basically, each incremental step along the way towards AGI benefits whoever builds it, except that after a certain point AGI takes a treacherous turn and kills literally everyone. So the claim isn’t that only certain states should develop AGI, it’s that no one should develop AGI (until alignment research is considerably further along).
I’ll take the point about misuse not being clear, and I’ve made a three-word edit to the text to cover it.
However, I do also state prior to this that:
“This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI.”
If anything, your post above bolsters my argument. If states do not share resources, they’ll be in competition with each other to work toward AGI, and everything before it, creating risks. If they do share resources, they create risks. However, it is logical for disadvantaged states to increase other security risks, by restoring a balance of power, in response to the AGI ones. If state A manages to nail alignment research and develop AGI, my framework is still valid: state B may respond defensively if it doesn’t have access to resources.
I’ve covered that; did you read it?