Sorry, I meant to use the two-place version. It wouldn’t be what’s right; rather, the completely analogous concept of “that-AI-right” would consist simply of that utility function.
To the extent that you are still talking about EY’s views, I still don’t think that’s correct… I think he would reject the idea that “that-AI-right” is analogous to right, or that “right” is a 2-place predicate.
That said, given that this question has come up elsethread and I’m apparently in the minority, and given that I don’t understand what all this talk of “right” adds to the discussion in the first place, it becomes increasingly likely that I’ve just misunderstood something.
In any case, I suspect we all agree that the AI’s decisions are motivated by its simple utility function in a manner analogous to how human decisions are motivated by our (far more complex) utility function. What disagreement exists, if any, involves the talk of “right” that I’m happy to discard altogether.