Agreed that a paperclip maximizer can “discover what is moral,” in the sense that you’re using it here. (Although there’s no reason to expect any particular PM to do so, no matter how intelligent it is.)
Can you clarify why this sort of discovery is in any way interesting, useful, or worth talking about?
It drives home the point that morality is an objective feature of the universe that doesn’t depend on the agent asking “what should I do?”
Huh. I don’t see how it drives home that point at all. But OK, at least I know what your intention is… thank you for clarifying that.
...morality is an objective feature of the universe...
Fascinating. I still don’t understand in what sense this could be true, except maybe the way I tried to interpret EY here and here. But those comments simply got downvoted without any explanation or attempt to correct me, so I can’t draw any particular conclusion from those downvotes.
You could argue that morality (what is right?) is human, and that other species will agree that, from a human perspective, what is moral is right and what is right is moral. Although I would agree, I don’t understand how such a confusing use of terms is helpful.
Morality is just a specific set of terminal values. It’s an objective feature of the universe because… humans have those terminal values. You can look inside the heads of humans and discover them. “Should,” “right,” and “moral,” in EY’s terms, are just being used as rigid designators to refer to those specific values.
I’m not sure I understand the distinction between “right” and “moral” in your comment.