I think a position some AI safety people have is:
“Powerful AI is necessary to take a pivotal act.”
I can buy that it is impossible to safely have an AI make extremely advanced progress in, e.g., nanotechnology. But it seems somewhat surprising to me if you need a general AI just to stop anyone else from making a general AI.
Political solutions, for example, certainly seem very hard, but… all solutions seem very hard? The reasons AI-based solutions are hard don't seem obviously weaker to me than the reasons political solutions are hard.
(The described position is probably a strawman; I'm mostly posting this to further my own thinking rather than as a criticism of anyone in particular.)