On the subject of morality in robots, I would assume that when (if?) we devise a working cognitive model of an A.I. that is indistinguishable from a human in every observable circumstance, the chances of it developing or learning sociopathic behaviour would be no different from the chances of a human developing sociopathic tendencies (which, although I can offer no scientific proof, I imagine occurs only in a minority of people).
I know this is an abstraction that doesn't do justice to the work people are putting into such a model, but I think the complexity of AI is one of the things that leads certain people to the knee-jerk reaction that all post-singularity AIs will want to exterminate the human race (fearing something because you don't understand it, etc.).