Emile, you can’t prove that the chess moves output by a human chess player will be legal chess moves. You may be able to prove that about a regular chess-playing program, but you will not be able to prove it for an AI that plays chess: an AI could try to cheat at chess when you’re not looking, just as a human being could.
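To make the contrast concrete, here is a minimal sketch (in Python, with a hypothetical legal_moves() placeholder standing in for a real move generator) of why legality is easy to prove for the ordinary program: the move is drawn from the generated legal-move list and from nowhere else, so "the output is always legal" holds by construction, no matter what selection policy you plug in.

```python
import random

# Hypothetical stand-in for a real move generator: a real engine would
# enumerate every legal move in the current position here.
def legal_moves(position):
    return ["e2e4", "d2d4", "g1f3"]  # placeholder moves for illustration

def choose_move(position):
    # The returned move is taken from legal_moves(position) and from nowhere
    # else, so "the output is always a legal move" follows from the structure
    # of the code, regardless of how clever the selection policy is.
    return random.choice(legal_moves(position))

print(choose_move("starting position"))
```

An AI in the sense discussed here is precisely something whose outputs are not pinned down by that kind of fixed structure, which is why the same proof is not available.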
Basically, a rigid restriction on the outputs, as in the chess-playing program, proves you’re not dealing with something intelligent, since something intelligent can consider the possibility of breaking the rules. So if you can prove that the AI won’t turn the universe into paperclips, that shows it is not even intelligent, let alone superintelligent.
This doesn’t mean that there are no restrictions at all on the output of an intelligent being, of course. It just means that the restrictions are too complicated for you to prove.
I can’t imagine it being much more complicated than creating the thing itself.