Kaj, this is an excellent article focusing on why an AGI will have a hard time adopting a model of the world similar to the ones that humans have.
However, I think that Ben’s main hangup about the Scary Idea is that he doesn’t believe in the complexity and fragility of moral values. In his article he gives “Growth, Choice, and Joy” as a sufficient value system for friendliness. He knows that these terms “conceal a vast mass of ambiguity, subtlety and human history,” but still, I think this is where Goertzel and SI differ.
Your key point (focusing on the difficulty of sharing an ontology rather than on the complexity of human values) is more or less novel, however, and you should publish it.
You may be right.