I think the point is that people try to point to things like God's will in order to appear to have a source of authority. Eliezer is trying to lead them to conclude that any such tablet being authoritative just by its nature is absurd, and only seems right because they expect the tablet to agree with them. Another method is asking why the tablet says what it does. Ask whether God's decrees are arbitrary or whether there is a good reason behind them; if there is a good reason, why not just follow those reasons directly?
Then it isn't an argument that moral realism is incoherent, and it isn't an argument that moral realism in general is false either. It's an argument against divine command theory. It might be successful as such, but it's a more modest target. (Also, it's not original; it's the Euthyphro dilemma.)
This is not addressing my criticism. He is saying that if objective morality existed and you don't like it, you should ignore it.
I am not saying whether objective morality exists or not; I am addressing the logic in a hypothetical world where it does exist.
If I remember right, it was in the context of there not being any universally compelling arguments. A paperclip maximizer would just ignore the tablet. It doesn't care what the "right" thing is. Humans probably don't care about the cosmic tablet either. That sort of thing isn't what "morality" refers to. The argument is more of a trick to get people to recognize that than a formal argument.
there not being any universally compelling arguments.
That was always a confused argument. A universally compelling argument is supposed to compel any epistemically rational agent. The fact that it doesn't compel a paperclipper, or a rock, is irrelevant.
Eliezer used "universally compelling argument" to illustrate a hypothetical argument that could persuade anything, even a paperclip maximiser. He didn't use it to refer to your definition of the term.
You can say that the fact it doesn't persuade a paperclip maximiser is irrelevant, but that has no bearing on the definition of the term as commonly used on LessWrong.
...which in turn has no bearing on the wider philosophical issue. Moral realism only requires moral facts to exist, not to be motivating. There's a valid argument that unmotivating facts can't align an AI, but it doesn't need such an elaborate defense.