I agree that the AI would not, merely by virtue of being smart enough to understand what humans really had in mind, start pursuing that. But fortunately, this is not my argument—I am not Steven Pinker.
That’s not the entirety of the problem—the AI wouldn’t start pursuing objective morality merely by virtue of knowing what it is.