I don’t think you have to be a moral anti-realist to believe the orthogonality thesis, but you certainly have to be a moral realist to not believe it.
Now if you’re a moral realist and you try to start writing an AI, you’re going to quickly see that you have a problem.
# Initiates AI morality
action_array.sort(key=morality, reverse=True)  # rank actions from most to least moral
do(action_array[0])  # take the most moral action
Doesn’t work. So you have to start defining “morality”, and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn’t rapidly lead to disastrous consequences. You end up with the only plausible option looking something like: “Examine what humans would want if they were rational and had all the information you have”. Seems to me that that is the moment you should just become a moral subjectivist -- maybe of the ideal observer theory variety.
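In the same toy pseudocode, that option looks something like the sketch below. Here idealized_preference_score is a hypothetical stand-in for “what humans would want if they were rational and fully informed”; nobody knows how to actually fill it in, which is the point.

def idealized_preference_score(action):
    # Hypothetical placeholder: score an action by what humans would want
    # if they were fully rational and had all the information the AI has.
    # No one knows how to implement this; it is only a label for the problem.
    ...

action_array.sort(key=idealized_preference_score, reverse=True)  # best candidate first
do(action_array[0])  # take the top-ranked action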
Now you might just believe the orthogonality thesis because you are a moral realist who doesn’t believe in motivational internalism; there are lots of ways to get there. But you can’t be an anti-realist and ever even come close to making such a mistake.
So you have to start defining “morality”, and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn’t rapidly lead to disastrous consequences.
No, because it’s possible that there genuinely is a total ordering, but that nobody knows how to figure out what it is. “No human always knows what’s right” is not an argument against moral realism, any more than “No human knows everything about God” is an argument against theism. (I’m not a moral realist or theist.)
I wasn’t making an argument against moral realism in the sentence you quoted.
I would expect, due to the nature of intelligence, that AIs would be likely to end up valuing certain things, like power or wireheading. I don’t see why this would require that those values are in some way true.
The (possibly extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other things that are equally hard to formalize.
Doesn’t work. So you have to start defining “morality”, and you figure out pretty quickly that no one has the least idea how to do that in a way that doesn’t rapidly lead to disastrous consequences.
What consequences? That claim is badly in need of support.
What consequences? That claim is badly in need of support.
No, it isn’t. It’s Less Wrong/MIRI boilerplate. I’m not really interested in rehashing that stuff with someone who isn’t already familiar with it.
The (possibly extreme) difficulty of figuring out objective morality in a way that can be coded into an AI is not an argument against moral realism. If it were, we would have to disbelieve in language, consciousness and other things that are equally hard to formalize.
The question was “Is the orthogonality thesis at odds with moral realism?” I answered: “Maybe not, but moral anti-realism is certainly closely aligned with the orthogonality thesis; it’s actually a trivial implication of moral anti-realism.”
If you are concerned that people aren’t taking the orthogonality thesis seriously enough, then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.
What consequences? That claim is badly in need of support.
No, it isn’t. It’s Less Wrong/MIRI boilerplate.
Which is accepted by virtually no domain expert in AI.
If you are concerned that people aren’t taking the orthogonality thesis seriously enough, then emphasizing that there is as much evidence for moral realism as there is for God is a pretty good way to frame the issue.
It could be persuasive to a selected audience: people with a science background who don’t know that much moral philosophy. If you do know much moral philosophy, you would know that there isn’t that much evidence for any position, and that there is no unproblematic default position.