Yeah, I’m not 100% sure my caricature of a person actually exists or is worth addressing. They’re mostly modeled on Robert Nozick, who is dead and cannot be reached for comment on value learning. But I had most of these thoughts and the post was really easy to write, so I decided to post it. Oh well :)
The person I am hypothetically thinking about is not very systematic; on average, they would admit that they don’t know where morality comes from. But they feel like they learn about morality by interacting in some mysterious way with an external moral reality, and that an AI is going to be missing something important, maybe even be unable to do good, if it doesn’t do that too. (So 90% overlap with your description of strong moral essentialism.)
I think these people plausibly should favor value learning, but they’re going to be dissatisfied with it and feel like it sends the wrong philosophical message.
What does it mean for a reality to be moral?