Carl: "This point is elementary. A 'friend' who seeks to transform himself into somebody who wants to hurt you is not your friend."
The switch from "friendly" (having kindly interest and goodwill; not hostile) to a "friend" (one attached to another by affection or esteem) is problematic. To me it radically distorts the meaning of FAI and makes this pithy little sound-bite irrelevant. I don't think it helps Bostrom's position to overload the concept of "friendly" with the connotations of close friendship.
Exactly how much human bias and irrationality is needed to sustain our human concept of a "friend", and is that a level of irrationality we'd want in a superintelligence? Can the human concept of friendship (involving extreme loyalty and trust in someone we happen to have known for some time and perhaps profitably exchanged favours with) be applied to the relationship between a computer and a whole species?
I can cope with the concept of "friendly" AI (kind to humans and non-hostile), but I have difficulty applying the distinct English word "friend" to an AI.
Suggested listening: Tim Minchin—If I Didn’t Have You
http://www.youtube.com/watch?v=Gaid72fqzNE