An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post:
1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare and similar philosophers, HJPEV
2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers, Dumbledore
3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?), Lucas Gloor (?), most Error Theorists, Quirrell
4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists?
The difference in practical, normative terms between 2), 4) and 3) is clear enough: 2 is a moral realist in the classic sense, 4 is a sceptic about morality but agrees that irreducible normativity exists, and 3 is a classic ‘antirealist’ who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference.
Harry floundered for words and then decided to simply go with the obvious. “First of all, just because I want to hurt someone doesn’t mean it’s right—”
“What makes something right, if not your wanting it?”
“Ah,” Harry said, “preference utilitarianism.”
“Pardon me?” said Professor Quirrell.
“It’s the ethical theory that the good is what satisfies the preferences of the most people—”
“No,” Professor Quirrell said. His fingers rubbed the bridge of his nose. “I don’t think that’s quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn’t believe in irreducible normativity. In particular, he’s committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things.
Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the claim that our moral intuitions assign diminishing returns to happiness vs suffering.
While there are some people who argue for accepting the repugnant conclusion (Tännsjö, 2004), most people would probably prefer the smaller but happier civilization – at least under some circumstances. One explanation for this preference might lie in intuition one discussed above, “Making people happy rather than making happy people.” However, this is unlikely to be what is going on for everyone who prefers the smaller civilization: If there was a way to double the size of the smaller population while keeping the quality of life perfect, many people would likely consider this option both positive and important. This suggests that some people do care (intrinsically) about adding more lives and/or happiness to the world. But considering that they would not go for the larger civilization in the Repugnant Conclusion thought experiment above, it also seems that they implicitly place diminishing returns on additional happiness, i.e. that the bigger you go, the more making an overall happy population larger is no longer (that) important.
By contrast, people are much less likely to place diminishing returns on reducing suffering – at least insofar as the disvalue of extreme suffering, or the suffering in lives that on the whole do not seem worth living, is concerned. Most people would say that no matter the size of a (finite) population of suffering beings, adding more suffering beings would always remain equally bad.
It should be noted that incorporating diminishing returns to things of positive value into a normative theory is difficult to do in ways that do not seem unsatisfyingly arbitrary. However, perhaps the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled.
And what are those difficulties mentioned? The most obvious is the absurd conclusion—that scaling up a population can turn it from axiologically good to bad:
Hence, given the reasonable assumption that the negative value of adding extra lives with negative welfare does not decrease relatively to population size, a proportional expansion in the population size can turn a good population into a bad one—a version of the so-called “Absurd Conclusion” (Parfit 1984). A population of one million people enjoying very high positive welfare and one person with negative welfare seems intuitively to be a good population. However, since there is a limit to the positive value of positive welfare but no limit to the negative value of negative welfare, proportional expansions (two million lives with positive welfare and two lives with negative welfare, three million lives with positive welfare and three lives with negative welfare, and so forth) will in the end yield a bad population.
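The argument in this quoted passage is essentially arithmetic, and can be made concrete with a toy model. All functional forms and constants below are my own illustrative assumptions (not Arrhenius's or Gloor's): positive value saturates at a ceiling as the happy population grows, while each suffering life contributes a constant disvalue.

```python
import math

def population_value(n_happy, n_suffering, v_max=1000.0, k=0.001, s=100.0):
    """Toy axiology with diminishing returns to happiness but not to suffering.

    Positive value approaches the ceiling v_max as n_happy grows (an
    exponential saturation curve, chosen purely for illustration), while
    each suffering life subtracts a fixed amount s.
    """
    positive = v_max * (1 - math.exp(-k * n_happy))
    negative = s * n_suffering
    return positive - negative

# Parfit-style proportional expansion: one suffering life per million happy ones.
# The ratio of happy to suffering lives never changes, yet total value flips sign.
for scale in (1, 5, 10, 20):
    print(scale, population_value(1_000_000 * scale, scale))
```

At scale 1 the population comes out good (roughly +900 in these made-up units), but because the positive term is capped at 1000 while the negative term grows without bound, by scale 20 the same proportional mix is decisively bad: exactly the Absurd Conclusion.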
Here, then, is the difference—If you believe, as a matter of fact, that our values cohere and place fundamental importance on coherence, whether because you think that is the way to get at the moral truth (2) or because you judge that human values do cohere to a large degree for whatever other reason and place fundamental value on coherence (1), you will not be satisfied with leaving your moral theory inconsistent. If, on the other hand, you see morality as continuous with your other life plans and goals (3), then there is no pressure to be consistent. So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims. The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion—because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2), in which case you cannot arbitrarily halt it?
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims.
I agree with that.
The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion—because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2), in which case you cannot arbitrarily halt it?
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
I think a lot of progress in philosophy is inhibited because people use underdetermined categories like “ethics” without making the question more precise.
The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
This is very interesting—I recall from our earlier conversation that you said you might expect some areas of agreement, just not on axiology:
(I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)
I also agree with that, except that I think axiology is the one place where I’m most confident that there’s no convergence. :)
Maybe my anti-realism is best described as “some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined.”
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
Following Singer in The Expanding Circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur, the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
In my attempted classification (of whether you accept convergence and/or irreducible normativity), I think you’d be somewhere between 1 and 3. I did say that those views might be on a spectrum depending on which areas of Normativity overall you accept, but I didn’t consider splitting up ethics into specific subdomains, each of which might have convergence or not:
Depending on which of the arguments you accept, there are four basic options. These are extremes of a spectrum, as while the Normativity argument is all-or-nothing, the Convergence argument can come by degrees for different types of normative claims (epistemic, practical and moral).
Assuming that it is possible to cleanly separate population ethics from ‘preference utilitarianism’, it is consistent, though quite counterintuitive, to demand reflective coherence in our non-population ethical views but allow whatever we want in population ethics (this would be view 1 for most ethics but view 3 for population ethics).
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined.
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain underdetermined, that is, we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it’s true that if “life goals don’t converge,” then “population ethics also doesn’t converge.”)
However, I think there would likely be convergence on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Following Singer in The Expanding Circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur, the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain underdetermined, that is, we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future lightcone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn’t reaching a different conclusion on the same task. Instead, they’re doing a different task. I’m interested in all the three questions I dissolved ethics into, whereas people who play the game “pick your version of consequentialism and answer every broadly-morality-related question with that” are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)
I’m here from your comment on Lukas’ post on the EA Forum. I haven’t been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It’s not obvious to me that it’s bad. Its rejection is also so close to separability/additivity that for someone who’s not sold on separability/additivity, an intuitive response is “Well ya, of course, so what?”. It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don’t.
So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
By deny, do you mean reject? Doesn’t negative utilitarianism work? Or do you mean incorrectly denying that the absurd conclusion follows from diminishing returns to happiness vs suffering?
Also, for what it’s worth, my view is that a symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
Prescriptive Anti-realism
An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post:
1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare and similar philosophers, HJPEV
2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers, Dumbledore
3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?), Lucas Gloor (?) most Error Theorists, Quirrell
4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists?
The difference in practical, normative terms between 2), 4) and 3) is clear enough − 2 is a moral realist in the classic sense, 4 is a sceptic about morality but agrees that irreducible normativity exists, and 3 is a classic ‘antirealist’ who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference.
The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn’t believe in irreducible normativity. In particular, he’s committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things.
Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the claim that our moral intuitions assign diminishing returns to happiness vs suffering.
And what are those difficulties mentioned? The most obvious is the absurd conclusion—that scaling up a population can turn it from axiologically good to bad:
Here, then, is the difference—If you believe, as a matter of fact, that our values cohere and place fundamental importance on coherence, whether because you think that is the way to get at the moral truth (2) or because you judge that human values do cohere to a large degree for whatever other reason and place fundamental value on coherence (1), you will not be satisfied with leaving your moral theory inconsistent. If, on the other hand, you see morality as continuous with your other life plans and goals (3), then there is no pressure to be consistent. So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims. The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion, because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2) - in which case you cannot arbitrarily halt it.
I agree with that.
It sounds like your contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
I think a lot of progress in philosophy is inhibited because people use underdetermined categories like “ethics” without making the question more precise.
This is very interesting—I recall from our earlier conversation that you said you might expect some areas of agreement, just not on axiology:
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
In my attempted classification (of whether you accept convergence and/or irreducible normativity), I think you’d be somewhere between 1 and 3. I did say that those views might be on a spectrum depending on which areas of Normativity overall you accept, but I didn’t consider splitting up ethics into specific subdomains, each of which might have convergence or not:
Assuming that it is possible to cleanly separate population ethics from ‘preference utilitarianism’, it is consistent, though quite counterintuitive, to demand reflective coherence in our non-population ethical views but allow whatever we want in population ethics (this would be view 1 for most ethics but view 3 for population ethics).
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain undetermined, that is we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics undetermined?
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it’s true that if “life goals don’t converge” then “population ethics also doesn’t converge”)
However, I think there would likely be converge on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes out learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clearthinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future lightcone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where everyone tries to guess the answer that everyone else will converge on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn't reaching a different conclusion on the same task. Instead, they're doing a different task. I'm interested in all three of the questions I dissolved ethics into, whereas people who play the game "pick your version of consequentialism and answer every broadly-morality-related question with that" are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)
I’m here from your comment on Lukas’ post on the EA Forum. I haven’t been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It's not obvious to me that it's bad. Its rejection is also so close to separability/additivity that for someone who's not sold on separability/additivity, an intuitive response is "Well, yeah, of course, so what?". It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don't.
By "deny," do you mean reject? Doesn't negative utilitarianism work? Or do you mean incorrectly claiming that the absurd conclusion doesn't follow from diminishing returns to happiness versus suffering?
Also, for what it's worth, my view is that symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
[1]
[2]
[3]