Eliezer’s famous AI-in-a-box experiment is a good example of the questionable nature of minds changed by information. Two independent people who were adamant that they would keep the simulated AI in the box were persuaded otherwise by nothing more than a very intelligent human. Presumably a super-intelligent AI would be far more persuasive still. In that context it is hard to say what information is safe and what is unsafe.
For the people who changed their minds and let Eliezer out of the box, was that a moral action? Beforehand, they would have been horrified at the prospect of failing so dramatically in their responsibility, and would probably have viewed doing so as a moral failure. Yet in the end, presumably, they thought they were doing right.
Unfortunately, the shield of privacy which has been drawn over these fascinating experiments has prevented a full-scale discussion of the issues they raise.