Prase, I think I would agree with that. But what Eliezer isn’t quite seeing is that even if mind-space in general is completely arbitrary, people programming an AI aren’t going to program something completely arbitrary. They’re going to program it to use assumptions and ways of argument that they find acceptable, and so it will also draw conclusions that they find acceptable, even if it does this better than they do themselves.
Also, Eliezer’s conclusion, “And then Wright converted to Christianity—yes, seriously. So you really don’t want to fall into this trap!” seems to imply that converting to Christianity is the worst thing that can happen to you, and thus that a world where the AI converts everyone to Christianity is worse than a world the AI fills with paperclips. I wonder if Eliezer really believes this, and would rather be made into paperclips than into a Christian?