This is not addressing my criticism. He is saying that if objective morality existed and you don't like it, you should ignore it.
I am not saying whether objective morality exists or not; I am addressing the logic in a hypothetical world where it does exist.
If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet. It doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" refers to. The argument is more of a trick to get people to recognize that than a formal argument.
there not being any universally compelling arguments.
That was always a confused argument. A universally compelling argument is supposed to compel any epistemically rational agent. The fact that it doesn't compel a paperclipper, or a rock, is irrelevant.
Eliezer used “universally compelling argument” to illustrate a hypothetical argument that could persuade anything, even a paper clip maximiser. He didn’t use it to refer to your definition of the word.
You can say that the fact it doesn't persuade a paper clip maximiser is irrelevant, but that has no bearing on the definition of the word as commonly used on LessWrong.
...which in turn has no bearing on the wider philosophical issue. Moral realism only requires moral facts to exist, not to be motivating. There's a valid argument that unmotivating facts can't align an AI, but it doesn't need such an elaborate defense.