Jonathan Haidt writes in The Righteous Mind: Why Good People Are Divided by Politics and Religion of his finding that while the upper classes in both Brazil and the USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes.
Does he? The data in the source disagree (tables on 619-620). I haven’t read all the text of the source, but it gives the uniform as the prototypical example of a custom and seems to say that it did work out that way. 40% of low SES adults in Recife (but not Porto Alegre) did claim it universal, but that’s less than on any of the interesting examples. (Children everywhere showed less class-sensitivity than adults.)
Just to be clear, the description of the results of the experiment is correct, just mixing up the control example with the experimental example.
Thanks, I edited the sentence to be clearer on that: "...that while the upper classes in both Brazil and the USA were likely to find violations of harmless taboos to be violations of social convention, lower classes in both countries were more likely to find them violations of absolute moral codes."
That’s a fun result.
Years ago, I had a “spiritual person” telling me about how god could help me if I prayed to him. Wishing to make a point by metaphor, I told him “it seems to me that god is just Santa Claus for grown-ups.” “Yes,” he responded, “Santa Claus gives kids what they want; god gives you what you need.”
If only clever repartee established truth, then Stephen Colbert would be the last president we would ever need.
If the smarter you get, the more things you think are social convention and the fewer you think are absolute morality, then what is our self-improving AI going to eventually think about the CEV we coded in back when he was but an egg?
It isn’t going to think the CEV is an absolute morality; it’ll just keep doing what it is programmed to do, because that is what it does. If the programming is correct, it’ll keep implementing CEV. If it was incorrect, then we’ll probably all die.
The relevance to ‘absolute morality’ here is that if the programmers happened to believe there was an absolute morality and tried to program the AI to follow that, then they would fail, potentially catastrophically.