Eliezer: If you create a friendly AI, do you think it will shortly thereafter kill you? If not, why not?
At present, Eliezer cannot functionally describe what ‘Friendliness’ would actually entail. He would likely claim that any outcome he views as undesirable (including, presumably, his own murder) is impermissible for a Friendly AI.
Imagine if Isaac Asimov not only lacked the ability to specify how the Laws of Robotics were to be implanted in artificial brains, but couldn’t specify what those Laws were supposed to be. You would essentially have Eliezer. Asimov spelled out his Laws well enough for himself and others to analyze them critically and examine their consequences, strengths, and weaknesses. ‘Friendly AI’ is not so specified and cannot be analyzed. No one can find problems with the concept because it isn’t substantive enough; it is essentially nothing but one huge, undefined problem.
But not a technical one. It is impossible to determine how difficult it might be to reach a goal if you cannot define the goal you’re reaching for. No amount of technological development or acquired skill will help if Eliezer does not first define what he’s trying to accomplish, which makes his ‘research’ into the subject rather pointless.
Presumably he wants us to stop thinking and send money.