> The (possible extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other difficult issues.

Doesn’t work. So you have to start defining “morality”, and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn’t rapidly lead to disastrous consequences.

What consequences? That claim is badly in need of support.

> What consequences? That claim is badly in need of support.

No, it isn’t. It’s Less Wrong/MIRI boilerplate. I’m not really interested in rehashing that stuff with someone who isn’t already familiar with it.

> The (possible extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other difficult issues.

The question was “is the orthogonality thesis at odds with moral realism?”. I answered: “maybe not, but moral anti-realism is certainly closely aligned with the orthogonality thesis—it’s actually a trivial implication of moral anti-realism.”

If you are concerned that people aren’t taking the orthogonality thesis seriously enough, then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.

>> What consequences? That claim is badly in need of support.

> No, it isn’t. It’s Less Wrong/MIRI boilerplate.

Which is accepted by virtually no domain expert in AI.

> If you are concerned that people aren’t taking the orthogonality thesis seriously enough, then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.

It could be persuasive to a selected audience—of people with a science background who don’t know that much moral philosophy. If you do know much moral philosophy, you would know that there isn’t that much evidence for any position, and that there is no unproblematic default position.