I’ll take the point about misuse not being clear, and I’ve made a three-word edit to the text to address it.
However, I do also state prior to this that:
“This argument is based on a pathway toward AGI. That is, while it will focus on the endpoint, where an AGI is created, it is likely that issues around resource distribution and relative power shifts within the international system caused by AI will come well before the development of AGI.”
If anything, your post above bolsters my argument. If states do not share resources, they’ll be in competition with each other in the race toward AGI, and everything before it, creating risks. If they do share resources, they still create risks. Either way, it is logical for disadvantaged states to increase other security risks, by restoring a balance of power, in response to the AGI-related ones. If state A manages to nail alignment research and develop AGI, my framework is still valid: state B may respond defensively if it doesn’t have access to resources.