If I read this right, it seems to come down, at its core, to building a narrow AI with advanced surveillance capabilities that can be used to direct the application of conventional force, gaining a generic strategic advantage that could be used, among other things, to police AGI development.
Does that seem like a fair summary for cramming it all into a single sentence?
Yes. Do you think it could happen?
It seems plausible, yes. I certainly expect militaries to build stronger surveillance capabilities, and those will in part be based on using AI to make sense of large corpora of data. Whether they will use those capabilities for AI policing purposes seems highly uncertain to me at present, but it seems like something people in the AI policy space could reasonably push for if they think it would be a worthwhile intervention. (I try to refrain from speculating on beneficial policy directions too much myself, since it's not a space I understand well enough to avoid making recommendations that would blow up in my face for unexpected reasons.)
There could be some intermediate step: for example, a narrow AI increases the efficiency of nuclear planning, which provides world domination, and on that basis a global "Turing police" is implemented that prevents research into more advanced forms of AI.