But I didn’t say the AIs would be safe (or super-intelligent, for that matter)
This sort of disclaimer can protect you in a discussion at the level of armchair philosophy, whose sole purpose is to show off how smart you are. But if you were to actually build an AI, and it went FOOM and tiled the universe with molecular smiley faces, taking all humans apart in the process, the fact that you never claimed the AI would be safe would not compel the universe to say "that's all right, then" and hit a magic reset button to give you another chance. Which is why we ask the question "Is this AI safe?" and tend not to like ideas that result in a negative answer, even if the idea didn't claim to address that concern.