I had a similar issue: None of the options seems right to me. Subjectivism seems to imply that one person’s judgment is no better than another’s (which is false), but constructivism seems to imply that ethics are purely a matter of convenience (also false). I voted the latter in the end, but am curious how others see this.
> Subjectivism seems to imply that one person’s judgment is no better than another’s (which is false)
Subjectivism implies that morals are two-place concepts, just like preferences. Murder isn’t moral or immoral, it can only be Sophronius!moral or Sophronius!immoral. This means Sophronius is probably best equipped to judge what is Sophronius!moral, so other people’s judgements clearly aren’t as good in that sense. But if you and I disagree about what’s moral, we may just be confused about words, because you’re thinking of Sophronius!moral and I’m thinking of DanArmak!moral, and these are similar but different things.

Is that what you meant?
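To make the two-place framing concrete, here’s a toy sketch in Python (my own illustration; the acts and judgments below are invented placeholders, not anyone’s actual views):

```python
# Morals as two-place concepts: "moral" takes a judge *and* an act,
# rather than an act alone. All judgments here are invented placeholders.
judgments = {
    ("Sophronius", "murder"): False,
    ("DanArmak", "murder"): False,
    ("Sophronius", "eating meat"): True,
    ("DanArmak", "eating meat"): False,
}

def moral(judge: str, act: str) -> bool:
    """A subjective predicate: only defined relative to a judge."""
    return judgments[(judge, act)]

# "Is eating meat moral?" is ill-formed under subjectivism; these are well-formed:
print(moral("Sophronius", "eating meat"))  # Sophronius!moral -> True
print(moral("DanArmak", "eating meat"))    # DanArmak!moral   -> False
```

The point of the type signature is that dropping the judge argument is exactly the confusion described above: two people can “disagree” while correctly reporting the values of two different functions.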
Everything you say is correct, except that I’m not sure Subjectivism is the right term to describe the meta-ethical philosophy Eliezer lays out. The Wikipedia definition, which is the one I’ve always heard used, says that subjectivism holds that morality is merely subjective opinion, while realism states the opposite. If I take that literally, then moral realism would hold the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).
All this is disregarding the empirical question of to what extent our preferences actually overlap—and to what extent we value each other’s utility functions in themselves. If the overlap/altruism is large enough, we could still end up with de facto objective morality, depending on the details. Has Eliezer ever tried answering this? Would be interesting.
> If I take that literally, then moral realism would hold the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).
That makes no sense to me. How is it different from saying nothing at all is subjective? This seems to just ignore the definition of “subjective”, which is “an attribute of a person, such that you don’t know that attribute’s value without knowing who the person is”. Or, more simply, a “subjective X” is a function from a person to X.
> All this is disregarding the empirical question of to what extent our preferences actually overlap—and to what extent we value each other’s utility functions in themselves. If the overlap/altruism is large enough, we could still end up with de facto objective morality, depending on the details. Has Eliezer ever tried answering this? Would be interesting.
I believe that’s where the whole CEV story comes into play. That is, Eliezer believes or believed that while today the shared preferences of all humans form a tiny, mostly useless set (we can’t even agree on which of us should be killed!), something useful and coherent could be “extrapolated” from them. However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated, or why all humans could agree on an extrapolation procedure, and I don’t believe it myself.
I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. “just a matter of opinion”. I think (though I could be mistaken) that the test described subjectivism as holding that morality is just a matter of opinion, which I would not agree with: morality depends on individual preferences, but only in the sense that healthcare depends on an individual’s health. It does not preclude a science of morality.
> However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated
Unfortunate, but understandable, as that’s a lot harder to prove than the philosophical argument.

I can definitely imagine that we find out that humans terminally value others’ utility functions, such that U(Sophronius) = x·U(DanArmak) + …, U(DanArmak) = y·U(otherguy) + …, and so everyone values everybody else’s utility in a roundabout way, which could yield something like a shared human utility function. But I don’t know if it’s actually true in practice.
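For what it’s worth, here’s a minimal sketch of how that roundabout valuation could cash out, under the strong (and unverified) assumption that utilities combine linearly; the weights below are made up for illustration:

```python
import numpy as np

# Sketch: each person's total utility is their intrinsic utility plus a
# weighted sum of everyone else's *total* utility, U = intrinsic + W @ U.
# If the weights are modest (spectral radius of W below 1), the recursion
# has a unique fixed point: U = (Id - W)^-1 @ intrinsic.

names = ["Sophronius", "DanArmak", "otherguy"]

intrinsic = np.array([1.0, 0.5, 2.0])  # made-up selfish utilities for some outcome

# W[i, j] = how much person i terminally values person j's total utility
W = np.array([
    [0.0, 0.3, 0.1],
    [0.2, 0.0, 0.3],
    [0.1, 0.2, 0.0],
])

spectral_radius = max(abs(np.linalg.eigvals(W)))
assert spectral_radius < 1, "mutual weights too strong: no stable fixed point"

U = np.linalg.solve(np.eye(len(names)) - W, intrinsic)
for name, u in zip(names, U):
    print(f"{name}: total utility {u:.3f}")
```

As the mutual weights grow, everyone’s fixed-point utility becomes increasingly dominated by the same weighted combination of the intrinsic utilities, which is one way the “de facto objective morality” mentioned above could emerge. Whether actual human preferences look anything like this is, as you say, an open empirical question.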
> I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. “just a matter of opinion”. I think (though I could be mistaken) that the test described subjectivism as holding that morality is just a matter of opinion, which I would not agree with:
I don’t think these two are really different. An “opinion”, a “belief”, and a “preference” are fundamentally similar; the word used indicates how attached the person is to that state, and how malleable it appears to be. There exist different underlying mechanisms, but these words don’t clearly differentiate between them; they don’t cut reality at its joints.
> Morality depends on individual preferences, but only in the sense that healthcare depends on an individual’s health.
How is that different from beliefs or normative statements about the world, which depend on what opinions an individual holds? “Holding an opinion” seems to cash out in either believing something, or having a preference for something, or advocating some action, or making a statement of group allegiance (“my sports team is the best, but that’s just my opinion”).
Maybe you use the phrase “just an opinion” to signal something people don’t actually care about, or don’t really believe in, just say but never act on, change far too easily, etc. That’s true of a lot of opinions that people hold. But it’s also true of a lot of morals.
> It does not preclude a science of morality.
You can always make a science of other people’s subjective attributes. You can make a science of people’s “just an” opinions, and it’s been done—about as well as making a science of morality.
I’m still not certain I managed to get across what I think is the issue. To clarify, here’s an example of the failure mode I often encounter:
Philosopher: Morality is subjective, because it depends on individual preferences.
Sophronius: Sure, but it’s objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.
Philosopher: But that does not get us a universal system of morality, because preferences still differ.
Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that’s wrong, right?
Philosopher: No, we cannot criticize other cultures, because morality is subjective.
The mistake the Philosopher makes here is conflating two different uses of subjectivity: he switches between there being no universal system of morality in practice and it not being possible to make moral claims in principle, using the same words (“morality is subjective”) for both. We agree that morality is subjective in the sense that moral preferences differ, but that should not preclude you from making object-level moral judgements (which are objectively true or false).
I think it’s actually very similar to the error people make when discussing “free will”. Someone argues that there is no (magical, non-deterministic) free will, and then concludes that we can’t punish criminals because they have no free will (in the sense of their preferences affecting their actions).
I understand now what you’re referring to. I believe this is formally called normative moral relativism, which holds that:
> because nobody is right or wrong, we ought to tolerate the behavior of others even when we disagree about the morality of it.
That is a minority opinion, though, and moral relativism as a whole (the non-normative kind) shouldn’t be implicated by it.
Here’s what I would reply in the place of your philosopher:
> Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that’s wrong, right?
Philosopher: It’s considered wrong by many people, like the two of us. And it’s considered right by some other people (or they wouldn’t regularly do it in some countries). So while we should act to stop it, it’s incorrect to call it simply wrong (because nothing is). But because most people don’t make such precise distinctions of speech, they might misunderstand us to mean that “it’s not really wrong”, a political/social disagreement; and since we don’t want that, we should probably use other, more technical terms instead of abusing the bare word “wrong”.
Recognizing that there is no objective moral good is instrumentally important. It’s akin to internalizing the orthogonality thesis (and rejecting, as I do, the premise of CEV). It’s good to remember that people, in general, don’t share most of your values and morals, and that a big reason much of the world does share them is that they were imposed on it by forceful colonialism. Which does not imply we should abandon these values ourselves.
Here’s my attempt to steelman the normative moral relativist position:
We should recognize our values genuinely differ from those of many other people. From a historical (and potential future) perspective, our values—like all values—are in a minority. All of our own greatest moral values—equality, liberty, fraternity—come with a historical story of overthrowing different past values, of which we are proud. Our “western” values today are widespread across the world in large degree because they were spread by force.
When we find ourselves in conflict with others—e.g. because they throw acid in their wives’ faces—we should be appropriately humble and cautious. Because we are also in conflict with our own past and our own future. Because we are unwilling to freeze our own society’s values for eternity and stop all future change (“progress”), but neither can we predict what would constitute progress, or else we would hold those better values already. And because we didn’t choose our existing values, they are often in mutual conflict, and they suffer evolutionary memetic pressure that we may not endorse on the meta level (e.g. a value that says it should be spread by the sword might be more memetically successful than the pacifistic version of the same value).