there not being any universally compelling arguments.
That was always a confused argument. A universally compelling argument is supposed to compel any epistemically rational agent. The fact that it doesn't compel a paperclipper or a rock is irrelevant.
Eliezer used "universally compelling argument" to mean a hypothetical argument that could persuade anything, even a paperclip maximiser. He didn't use it to refer to your definition of the term.
You can say that the fact it doesn't persuade a paperclip maximiser is irrelevant, but that has no bearing on how the term is commonly used on LessWrong.
...which in turn has no bearing on the wider philosophical issue. Moral realism only requires moral facts to exist, not to be motivating. There's a valid argument that non-motivating facts can't align an AI, but it doesn't need such an elaborate defense.