I’m not sure what the difference is between a guarantee that the AI will not X, on the one hand, and building an AI in such a way that it’s unable to X, on the other.

About the same as the difference between coming up with a true conjecture and constructing a proof, except larger.
Ah, I see what you mean now. At least, I think I do. OK, fair enough.
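As an aside, the conjecture/proof distinction invoked above can be made concrete in a proof assistant. Below is a minimal Lean 4 sketch (the proposition and names are illustrative, not from the exchange): stating the claim is one line, but the guarantee only exists once a term inhabiting it is produced.

```lean
-- Stating a "conjecture" is cheap: it is just a proposition.
def conjecture : Prop := ∀ a b : Nat, a + b = b + a

-- A guarantee is a proof: an actual term inhabiting the proposition.
-- Lean's standard library happens to supply the evidence here.
theorem guarantee : ∀ a b : Nat, a + b = b + a := Nat.add_comm
```

The asymmetry is the point of the analogy: the first definition compiles whether or not the claim is provable, while the second compiles only if the evidence can actually be constructed.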