@RI: Immoral, of course. A Friendly AI should not be a person. I would like to know at least enough about this “consciousness” business to ensure a Friendly AI doesn’t have (think it has) it. An even worse critical failure is if the AI’s models of people are people.
The most accurate possible map of a person will probably tend to be a person itself, since a model detailed enough to predict a person's every thought and feeling must itself be carrying out something very like those thoughts and feelings.
Why wouldn’t you want your AI to have feelings? I would want it to have feelings. When a superintelligence runs the world, I want it to be one that has feelings—perhaps feelings even much like my own.
As for the most accurate map being the territory, that’s such a basic error I don’t feel the need to explain it further. The territory is not a map; therefore it cannot be an accurate map.
pnrjulius: He answered this a little later: http://lesswrong.com/lw/x7/cant_unbirth_a_child/