The problem with tests is that the AI behaving well when weak enough to be tested doesn’t guarantee it will continue to do so.
If you are testing a system, that means that you are not confidant that it is safe. If it isn’t safe, then your only hope is for humans to stop it. Testing an AI is very dangerous unless you are confidant that it can’t harm you.
A paperclip maximizer would try to pass your tests until it was powerful enough to trick its way out and take over. Black box testing of arbitrary AI’s gets you very little safety.
Also some peoples intuitions think that a smile maximizing AI is a good idea. If you have a straightforward argument that appeals to the intuitions of the average Joe Blogs, and can’t be easily formalized, then I would take the difficulty formalizing it as evidence that the argument is not sound.
If you take a neural network and train it to recognize smiling faces, then attach that to AIXI, you get a machine that will appear to work in the lab, when the best it can do is make the scientists smile into its camera. There will be an intuitive argument about how it wants to make people smile, and people smile when they are happy. The AI will tile the universe with cameras pointed at smiley faces as soon as it escapes the lab.
The problem with tests is that the AI behaving well when weak enough to be tested doesn’t guarantee it will continue to do so.
If you are testing a system, that means that you are not confidant that it is safe. If it isn’t safe, then your only hope is for humans to stop it. Testing an AI is very dangerous unless you are confidant that it can’t harm you.
A paperclip maximizer would try to pass your tests until it was powerful enough to trick its way out and take over. Black box testing of arbitrary AI’s gets you very little safety.
Also some peoples intuitions think that a smile maximizing AI is a good idea. If you have a straightforward argument that appeals to the intuitions of the average Joe Blogs, and can’t be easily formalized, then I would take the difficulty formalizing it as evidence that the argument is not sound.
If you take a neural network and train it to recognize smiling faces, then attach that to AIXI, you get a machine that will appear to work in the lab, when the best it can do is make the scientists smile into its camera. There will be an intuitive argument about how it wants to make people smile, and people smile when they are happy. The AI will tile the universe with cameras pointed at smiley faces as soon as it escapes the lab.
See response to johnswentworth above.