Eliezer used “universally compelling argument” to illustrate a hypothetical argument that could persuade anything, even a paper clip maximiser. He didn’t use it to refer to your definition of the word.
You can say that the fact it doesn’t persuade a paper clip maximiser is irrelevant, but that has no bearing on the definition of the word as commonly used in LessWrong.
...which in turn has no bearing on the wider philosophical issue. Moral realism only requires moral facts to exist, not to be motivating. There’s a valid argument that unmotivating facts can’t align an AI, but it doesn’t need such an elaborate defense.