This is Yudkowsky’s Hidden Complexity of Wishes problem from the human perspective. The concept of “caring” is rooted so deeply (in our flesh, I insist) that we cannot express it. Getting across to the AI the idea that you care about your mother is not the same as asking it for an outcome. This is why the problem is so hard. How would you convince the AI, in your first example, that your care was real? Or in your #2, that your wish was different from what it delivered? And how do you tell, you ask? By being disappointed in the result! (For instance in Yudkowsky’s example, when the AI delivers Mom out of the burning building as you requested, but in pieces.)
My point is that value is not a matter of cognition in the brain, but of caring from the heart. When the AI dismisses as “prejudice” your insistence that it didn’t deliver what you wanted, I don’t think you’d be happy with the above defense.
What I wrote wasn’t intended as a defense of anything; it was an attempt to understand what you were saying. Since you completely ignored the questions I asked (which is of course your prerogative), I am none the wiser.
I think you may have misunderstood my conjecture about prejudice; if an AI professes to “care” but doesn’t in fact act in ways we recognize as caring, and if we conclude that actually it doesn’t care in the sense we meant, that’s not prejudice. (But it is looking at “outcomes”, which you disdained before.)
[Ref: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/]