I made the familiar counter-argument that this is irrelevant, because nobody is advocating building a random mind. Rather, what some of us are suggesting is to build a mind with a Friendly-looking goal system and a cognitive architecture that's roughly human-like, but with a non-human-like propensity to choose its actions rationally based on its goals, and then to raise this AGI mind in a caring way and integrate it into society. Arguments against the Friendliness of random minds are irrelevant as critiques of this sort of suggestion.
If Ben is right on this point, wouldn’t this lead to the conclusion that human enhancement would be a better approach to Friendly superintelligence than AI programming? We don’t have much clue how to go about raising a computer program in a caring way and integrating it into society; but we do manage to do this with at least some highly intelligent human children.