Yes we can. Superintelligences have abilities that normal intelligences do not.
Imagine a game of chess. A good AI will make vastly different moves than a bad AI or a human. More skilled players would easily be detectable. They would make very different moves.
An AI that is indistinguishable from a human (to an even greater superintelligent AI) is not dangerous, because humans are not dangerous. Just like a chess master that is indistinguishable from a regular player wouldn’t win many games.
But in some games it is better to look less capable at the beginning — as in poker, espionage, and the AI-box experiment.
It may be indistinguishable only until it gets out of the building. The recent movie Ex Machina had exactly that plot.
The AI doesn’t want to escape from the building. Its utility function is basically to mimic humans. That is a terminal value, not a subgoal.
But most humans would like to escape from any confinement — so an AI that faithfully mimics humans would want to escape too.