The controversies between human beings about which specific sets of values are moral, at every scale, large and small, are legendary beyond cliché.
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.
You may or may not find that convincing (you’ll get to the arguments for it if you’re reading the sequences), but assuming it is true, then “morality is a specific set of values” is correct, though vague: more precisely, morality is a very complicated set of terminal values which, in this world, happens to be embedded solely in a species of minds that are not naturally very good at rationality. That leads to massive disagreement about instrumental values (though most people do not notice that the disagreement is about instrumental values).
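To make the terminal/instrumental distinction concrete, here is a toy sketch (invented names and numbers, an illustration only, not a model from the sequences): two agents score actions with the same terminal utility function but different factual beliefs about consequences, and so endorse different actions; once the factual beliefs agree, so do the endorsements.

```python
# Toy sketch: two agents share the same terminal values but hold different
# factual beliefs about consequences, so they endorse different actions.
# All names and numbers here are invented purely for illustration.

# Shared terminal values: how much each outcome feature ultimately matters.
TERMINAL_UTILITY = {"people_fed": 10.0, "money_spent": -1.0}

def endorsed_action(beliefs):
    """Return the action whose believed consequences score highest under
    the shared terminal utility function."""
    def value(action):
        outcome = beliefs[action]  # what this agent thinks the action causes
        return sum(TERMINAL_UTILITY[k] * v for k, v in outcome.items())
    return max(beliefs, key=value)

# Agent A believes funding aid feeds five people per unit of money spent.
beliefs_a = {
    "fund_aid": {"people_fed": 5.0, "money_spent": 1.0},
    "cut_aid":  {"people_fed": 0.0, "money_spent": 0.0},
}
# Agent B (mistakenly, say) believes the same program feeds almost no one.
beliefs_b = {
    "fund_aid": {"people_fed": 0.05, "money_spent": 1.0},
    "cut_aid":  {"people_fed": 0.0,  "money_spent": 0.0},
}

print(endorsed_action(beliefs_a))  # fund_aid
print(endorsed_action(beliefs_b))  # cut_aid  (instrumental disagreement)

# Correct B's factual beliefs and the "moral" disagreement disappears,
# because the terminal values were identical all along.
beliefs_b_corrected = dict(beliefs_a)  # B now agrees with A on the facts
print(endorsed_action(beliefs_b_corrected))  # fund_aid again
```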
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning. In other words, human brains have a common moral architecture, and disagreements are at the level of instrumental, rather than terminal, values and result from mistaken factual beliefs and reasoning errors.
It is? That’s a worry. Consider this a +1 for “That thesis is totally false and only serves signalling purposes!”
I… think it is. Maybe I’ve gotten something terribly wrong, but I got the impression that this is one of the points of the complexity of value and metaethics sequences, and I seem to recall that it’s the basis for expecting humanity’s extrapolated volition to actually cohere.
I seem to recall that it’s the basis for expecting humanity’s extrapolated volition to actually cohere.
This whole area isn’t covered all that well (as Wei noted). I assumed that CEV would rely on solving an implicit cooperation problem between conflicting moral systems. It doesn’t appear at all unlikely to me that some people are intrinsically selfish to some degree, and that their extrapolated volitions would therefore be quite different.
Note that I’m not denying that some people present (or usually just assume) the thesis you present. I’m just glad that there are usually others who argue against it!
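One crude way to picture that implicit cooperation problem is the toy sketch below (invented names, numbers, and aggregation rule; this is not the actual CEV proposal, just an illustration of why some bargaining step seems necessary when extrapolated volitions are partly selfish):

```python
# Toy sketch of the "implicit cooperation problem": partly selfish
# extrapolated volitions have different individual optima, so nothing
# coheres until some explicit aggregation or bargaining rule is added.
# Names, numbers, and the maximin rule are invented for illustration only.

OUTCOMES = ["favor_alice", "favor_bob", "compromise"]

# Each person's extrapolated utilities over the candidate outcomes.
EXTRAPOLATED = {
    "alice": {"favor_alice": 10, "favor_bob": 0,  "compromise": 6},
    "bob":   {"favor_alice": 0,  "favor_bob": 10, "compromise": 6},
}

def individual_optimum(person):
    """The outcome this person's extrapolated volition likes best."""
    prefs = EXTRAPOLATED[person]
    return max(prefs, key=prefs.get)

def maximin_choice():
    """One stand-in aggregation rule: maximize the worst-off person's
    utility. Any bargaining solution could be slotted in here instead."""
    return max(OUTCOMES, key=lambda o: min(p[o] for p in EXTRAPOLATED.values()))

print({person: individual_optimum(person) for person in EXTRAPOLATED})
# {'alice': 'favor_alice', 'bob': 'favor_bob'}  (the volitions do not cohere)
print(maximin_choice())  # compromise, but only after choosing a rule
```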
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.
Maybe it’s true if you also specify “if they were fully capable of modifying their own moral intuitions.” I have an intuition (an unexamined belief? a hope? a sci-fi trope?) that humanity as a whole will continue to evolve morally and roughly converge on a morality that resembles current first-world liberal values more than, say, Old Testament values. That is, it would converge, in the limit of global prosperity and peace and dialogue, and assuming no singularity occurs and the average lifespan stays constant. You can call this naive if you want to; I don’t know whether it’s true. It’s what I imagine Eliezer means when he talks about “humanity growing up together”.
This growing-up process currently involves raising children, which can be viewed as a crude way of rewriting your personality from scratch, and excising vestiges of values you no longer endorse. It’s been an integral part of every culture’s moral evolution, and something like it needs to be part of CEV if it’s going to actually converge.
It is a common thesis here that most humans would ultimately have the same moral judgments if they were in full agreement about all factual questions and were better at reasoning.
That’s not plausible. That would be some sort of objective morality, and there is no such thing. Humans have brains, and brains are complicated; you can’t expect two of them to imply exactly the same preferences.
Now, the non-crazy version of what you suggest is that most people’s preferences are roughly similar, that they won’t differ substantially in any major aspect. But when you focus on the details, everyone is bound to want their own thing.
I assumed that CEV would rely on solving an implicit cooperation problem between conflicting moral systems.
That’s exactly what I took CEV to entail.
Now this is a startling claim.
Be more specific!