As before, I found the question on metaethics (31) to be a tossup because I agree with several of the options given. I’d be interested in hearing from people who agree with some but not all of these answers:
Non-cognitivism: Moral statements don’t express propositions and can neither be true nor false. “Murder is wrong” means something like “Boo murder!”.
Error theory: Moral statements have a truth-value, but attempt to describe features of the world that don’t exist. “Murder is wrong” and “Murder is right” are both false statements because moral rightness and wrongness aren’t features that exist.
Subjectivism: Some moral statements are true, but not universally, and the truth of a moral statement is determined by non-universal opinions or prescriptions, and there is no non-attitudinal determinant of rightness and wrongness. “Murder is wrong” means something like “My culture has judged murder to be wrong” or “I’ve judged murder to be wrong”.
I’m a subjectivist: I understand that when someone says “murder is wrong”, she’s expressing a personal judgement—others can judge differently. But I also know that most people are moral realists, so they think they’re describing features of the world, when those features don’t in fact exist; thus, I believe in error theory. And what does it mean to proclaim that something “is wrong”, other than to boo it, i.e. to call for people not to do it and to shun those who do? Thus, I also agree with non-cognitivism.
I don’t agree with any of these options, but I proposed the question back in 2014, so I hope I can shed some light. The difference between non-cognitivism and error theory is that the error theory supposes that people attempt to describe some feature of the world when they make moral statements, and that feature doesn’t exist, while non-cognitivism holds that moral statements only express emotional attitudes (“Yay for X!”) or commands (“Don’t X!”), which can neither be true nor false. The difference between error theory and subjectivism is that subjectivists believe that some moral statements are true, but that they are made true by something mind-dependent (but what counts as mind-dependent turns out to be quite complicated).
And what does it mean to proclaim that something “is wrong”, other than to boo it, i.e. to call for people not to do it and to shun those who do?
The intended difference is something like —
“I disapprove of murder.” This is a proposition that can be true or false. (Perhaps I actually approve of murder, in which case it is false.)
“Boo, murder!” This is not a proposition. It is an act of disapproval. If I say this, I am not claiming that I disapprove — I am disapproving.
It’s like the difference between asserting, “I appreciate that musical performance,” and actually giving a standing ovation. (It’s true that people sometimes state propositions to express approval or disapproval, but we also use non-proposition expressions as well.)
I don’t understand how this difference leads to different (and disjoint / disagreeing) philosophical positions on what it means for people to say that “murder is wrong”.
If someone says they disapprove of murder, they could be wrong or lying, or they could actually disapprove a little but say they disapprove lots, or vice versa. And if they actually boo murder, that’s a signal they really disapprove of it, enough to invest energy in booing. But aside from signalling and credibility and how much they care about it, isn’t their claimed position the same?
Are you saying non-cognitivists claim people who say “murder is wrong” never actually engage in false signalling, and we should take all statements of “murder is wrong” to be equivalent to actual booing? That sounds trivially false; surely that’s not the intent of non-cognitivism.
If moral claims are not propositions, then propositional logic doesn’t work on them — notably, this means that a moral claim could never be the conclusion of a logical proof.
Which would stop us from deriving new moral claims from existing ones. I understand now. Thanks!
So, if I understand correctly now, non-cognitivists say that human morals aren’t constrained by the rules of logic. People don’t care much about contradictions between their moral beliefs, they don’t try to reduce them to consistent and independent axioms, they don’t try to find new rules implied by old ones. They just cheer and boo certain things.
It’s worth noting that there are non-cognitivist positions other than emotivism (the “boo, murder!” position). For instance, there’s the prescriptivist position — that moral claims are imperative sentences or commands. This is also non-cognitivist, because commands are not propositions and don’t have truth-values. But it’s not emotivist, since we can do a kind of logic on commands, even though it’s not the same as the logic on propositions.

https://en.wikipedia.org/wiki/Non-cognitivism

https://en.wikipedia.org/wiki/Imperative_logic
(“Boo, murder!” does not logically entail “Boo, murdering John!” … but the command “Don’t murder people!” conjoined with the proposition “John is a person.” does seem to logically entail the command “Don’t murder John!” So conjunction of commands and propositions works. But disjunction on commands doesn’t work.)
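The detachment pattern in that parenthetical can be sketched in code. This is a toy model of my own for illustration, not standard imperative-logic machinery: a universal prohibition plus a factual premise yields a specific prohibition, while no command ever carries a truth-value.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Prohibition:
    """A command 'Don't <verb> <target>!'; it has no truth-value."""
    verb: str
    target: str  # an individual's name, or a category like 'person'

@dataclass(frozen=True)
class Fact:
    """A proposition '<individual> is a <category>.'; this one is truth-apt."""
    individual: str
    category: str

def detach(cmd: Prohibition, fact: Fact) -> Optional[Prohibition]:
    """If cmd prohibits acting on a whole category, and fact places an
    individual in that category, derive the specific prohibition."""
    if cmd.target == fact.category:
        return Prohibition(cmd.verb, fact.individual)
    return None

dont_murder_people = Prohibition("murder", "person")
john_is_a_person = Fact("John", "person")
print(detach(dont_murder_people, john_is_a_person))
# derives Prohibition(verb='murder', target='John'), i.e. "Don't murder John!"
```

Note that `detach` mixes a command and a proposition, as the parenthetical suggests, without ever assigning the command a truth-value.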
I was similarly torn between answers and I’m glad you brought this up. I think substantive realism is the most useful perspective here, but I clicked constructivism in an attempt to honor the spirit of the question, even if it was kind of a technicality.
For me, the hard-to-express part is that the universe cares nothing about human ethics, but it’s fine for us (humans) to view our shared utility function as objective.
I treat a moral sense similar to how I’d treat a “yummy” sense. Your nervous system does an evaluation. Sometimes it evaluates as yummy, sometimes as moral.
But the moral sense operates with a different domain and range than yummy: it has preferences between behaviors, preferences between preferences about behaviors, and so on, and it implies reward and punishment up the levels of abstraction in that scale of preferences.
I opted for Subjectivism as the best match.
Error theory just seems rather dumb. I think I get the sense in which you mean it, which seems like a valid observation about the error of objectivists, but I think you’re mistaking the definition here. It said “moral rightness and wrongness aren’t features that exist”, but they do exist, regardless of any confusion that moral objectivists may have about them. They exist to you, right?
Non-cognitivism seems like a straw man moral subjectivism. There is a lot more to it than just “boo”. There is structure to the behavioral preferences and the resulting behavioral responses.
I had a similar issue: None of the options seems right to me. Subjectivism seems to imply that one person’s judgment is no better than another’s (which is false), but constructivism seems to imply that ethics are purely a matter of convenience (also false). I voted the latter in the end, but am curious how others see this.
Subjectivism seems to imply that one person’s judgment is no better than another’s (which is false)
Subjectivism implies that morals are two-place concepts, just like preferences. Murder isn’t moral or immoral, it can only be Sophronius!moral or Sophronius!immoral. This means Sophronius is probably best equipped to judge what is Sophronius!moral, so other people’s judgements clearly aren’t as good in that sense. But if you and I disagree about what’s moral, we may be just confused about words because you’re thinking of Sophronius!moral and I’m thinking of DanArmak!moral and these are similar but different things. Is that what you meant?
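The “two-place” idea can be sketched as a function that takes the judge as an argument. The judges and verdicts below are invented for illustration:

```python
# A minimal sketch of morals as two-place concepts: there is no
# is_wrong(act), only is_wrong(judge, act). All verdicts here are made up.

moral_verdict = {
    ("Sophronius", "murder"): "wrong",
    ("DanArmak", "murder"): "wrong",
    ("Sophronius", "eating meat"): "permissible",
    ("DanArmak", "eating meat"): "wrong",
}

def is_wrong(judge: str, act: str) -> bool:
    """Evaluate judge!moral for a given act."""
    return moral_verdict[(judge, act)] == "wrong"

# The same act gets different verdicts under different judges: an apparent
# "moral disagreement" may just be two different functions being evaluated.
print(is_wrong("Sophronius", "eating meat"))  # False
print(is_wrong("DanArmak", "eating meat"))    # True
```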
Everything you say is correct, except that I’m not sure Subjectivism is the right term to describe the meta-ethical philosophy Eliezer lays out. The Wikipedia definition, which is the one I’ve always heard used, says that subjectivism holds that moral judgements are merely subjective opinion, while realism states the opposite. If I take that literally, then moral realism would give the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).
All this is disregarding the empirical question of to what extent our preferences actually overlap—and to what extent we value each other’s utility functions in themselves. If the overlap/altruism is large enough, we could still end up with a de facto objective morality. Has Eliezer ever tried answering this? It would be interesting.
If I take that literally, then moral realism would give the correct answer, as everything regarding morality concerns empirical fact (as the article you link to tried to explain).
That makes no sense to me. How is it different from saying nothing at all is subjective? This seems to just ignore the definition of “subjective”, which is “an attribute of a person, such that you don’t know that attribute’s value without knowing who the person is”. Or, more simply, a “subjective X” is a function from a person to X.
All this is disregarding the empirical question of to what extent our preferences actually overlap—and to what extent we value each other’s utility functions in themselves. If the overlap/altruism is large enough, we could still end up with a de facto objective morality. Has Eliezer ever tried answering this? It would be interesting.
I believe that’s where the whole CEV story comes into play. That is, Eliezer believes or believed that, while today the shared preferences of all humans form a tiny, mostly useless set (we can’t even agree on which of us should be killed!), something useful and coherent could be “extrapolated” from them. However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated, or why all humans could agree on an extrapolation procedure, and I don’t believe it myself.
I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. “just a matter of opinion”. I think (though I could be mistaken) that the test described subjectivism as morality just being a matter of opinion, which I would not agree with: Morality depends on individual preferences, but only in the sense that healthcare depends on an individual’s health. It does not preclude a science of morality.
However, as far as I know, he never gave an actual argument for why such a thing could be extrapolated
Unfortunate, but understandable as that’s a lot harder to prove than the philosophical argument.
I can definitely imagine that we find out that humans terminally value others’ utility functions, such that U(Sophronius) = X(U(DanArmak) + …), and U(DanArmak) = U(OtherGuy) + …, and so everyone values everybody else’s utility in a roundabout way, which could yield something like a human utility function. But I don’t know if it’s actually true in practice.
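The mutual-valuation idea can be made concrete with a tiny linear model (all names and weights below are invented): if each person's total utility is a private base term plus weighted terms for the others' total utilities, U[i] = base[i] + sum_j weight[i][j] * U[j], then for small enough weights there is a unique fixed point, which simple iteration finds.

```python
# Toy sketch of mutually-referential utility functions; not a claim about
# actual human values. Converges because every weight row sums to < 1.

base = {"Sophronius": 1.0, "DanArmak": 2.0, "OtherGuy": 0.5}
weight = {  # how much each person terminally values each other's utility
    "Sophronius": {"DanArmak": 0.3, "OtherGuy": 0.1},
    "DanArmak":   {"Sophronius": 0.2, "OtherGuy": 0.2},
    "OtherGuy":   {"Sophronius": 0.1, "DanArmak": 0.1},
}

def solve_utilities(base, weight, iterations=200):
    """Iterate U[i] = base[i] + sum_j weight[i][j] * U[j] to its fixed point."""
    u = dict(base)
    for _ in range(iterations):
        u = {i: base[i] + sum(w * u[j] for j, w in weight[i].items())
             for i in base}
    return u

u = solve_utilities(base, weight)
# Everyone ends up above their private base term, because each person
# partly internalizes everyone else's utility, "in a roundabout way".
print(u)
```

Whether anything like this holds for real human preferences is, as the comment says, an open empirical question; the sketch only shows that mutual valuation is mathematically coherent rather than circular.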
I am making a distinction here between subjectivity as you define it, and subjectivity as it is commonly used, i.e. “just a matter of opinion”. I think (though I could be mistaken) that the test described subjectivism as morality just being a matter of opinion, which I would not agree with:
I don’t think these two are really different. An “opinion”, a “belief”, and a “preference” are fundamentally similar; the word used indicates how attached the person is to that state, and how malleable it appears to be. There exist different underlying mechanisms, but these words don’t clearly differentiate between them, they don’t cut reality at its joints.
Morality depends on individual preferences, but only in the sense that healthcare depends on an individual’s health.
How is that different from beliefs or normative statements about the world, which depend on what opinions an individual holds? “Holding an opinion” seems to cash out in either believing something, or having a preference for something, or advocating some action, or making a statement of group allegiance (“my sports team is the best, but that’s just my opinion”).
Maybe you use the phrase “just an opinion” to signal something people don’t actually care about, or don’t really believe in, just say but never act on, change far too easily, etc. That’s true of a lot of opinions that people hold. But it’s also true of a lot of morals.
It does not preclude a science of morality.
You can always make a science of other people’s subjective attributes. You can make a science of people’s “just an” opinions, and it’s been done—about as well as making a science of morality.
I’m still not certain if I managed to get what I think is the issue across. To clarify, here’s an example of the failure mode I often encounter:
Philosopher: Morality is subjective, because it depends on individual preferences.

Sophronius: Sure, but it’s objective in the sense that those preferences are material facts of the world which can be analyzed objectively like any other part of the universe.

Philosopher: But that does not get us a universal system of morality, because preferences still differ.

Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that’s wrong, right?

Philosopher: No, we cannot criticize other cultures, because morality is subjective.
The mistake the Philosopher makes here is conflating two different uses of subjectivity: he switches between “morality is subjective” in the sense that there is no universal system of morality in practice, and “morality is subjective” in the sense that it is impossible to make moral claims in principle. We agree that morality is subjective in the first sense, in that moral preferences differ, but that should not preclude you from making object-level moral judgements (which are objectively true or false).
I think it’s actually very similar to the error people make when discussing “free will”. Someone argues that there is no (magical, non-deterministic) free will, and then concludes that we can’t punish criminals because they have no free will (in the sense of their preferences affecting their actions).
I understand now what you’re referring to. I believe this is formally called normative moral relativism, which holds that:
because nobody is right or wrong, we ought to tolerate the behavior of others even when we disagree about the morality of it.
That is a minority opinion, though, and moral relativism in general (the non-normative kind) shouldn’t be implicated by it.
Here’s what I would reply in the place of your philosopher:
Sophronius: But if someone in Cambodia gets acid thrown in her face by her husband, that’s wrong, right?
Philosopher: It’s considered wrong by many people, like the two of us. And it’s considered right by some other people (or they wouldn’t regularly do it in some countries). So while we should act to stop it, it’s incorrect to call it simply wrong (because nothing is). But because most people don’t make such precise distinctions of speech, they might misunderstand us to mean that “it’s not really wrong”, a political/social disagreement; and since we don’t want that, we should probably use other, more technical terms instead of abusing the bare word “wrong”.
Recognizing that there is no objective moral good is instrumentally important. It’s akin to internalizing the orthogonality thesis (and rejecting, as I do, the premise of CEV). It’s good to remember that people, in general, don’t share most of your values and morals, and that a big reason much of the world does share them is because they were imposed on it by forceful colonialism. Which does not imply we should abandon these values ourselves.
Here’s my attempt to steelman the normative moral relativist position:
We should recognize our values genuinely differ from those of many other people. From a historical (and potential future) perspective, our values—like all values—are in a minority. All of our own greatest moral values—equality, liberty, fraternity—come with a historical story of overthrowing different past values, of which we are proud. Our “western” values today are widespread across the world in large degree because they were spread by force.
When we find ourselves in conflict with others—e.g. because they throw acid in their wives’ faces—we should be appropriately humble and cautious. Because we are also in conflict with our own past and our own future. Because we are unwilling to freeze our own society’s values for eternity and stop all future change (“progress”), but neither can we predict what would constitute progress, or else we would hold those better values already. And because we didn’t choose our existing values, they are often in mutual conflict, and they suffer evolutionary memetic pressure that we may not endorse on the meta level (i.e. a value that says it should be spread by the sword might be more memetically successful than the pacifistic version of the same value).