This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons in both cases: I think that if “life goals don’t converge,” then “population ethics also doesn’t converge.”)
However, I think there would likely be convergence on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics.’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Following Singer in The Expanding Circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur, the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments for why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that population ethics is to remain underdetermined, that is, if we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future light cone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn’t reaching a different conclusion on the same task. Instead, they’re doing a different task. I’m interested in all three questions I dissolved ethics into, whereas people who play the game “pick your version of consequentialism and answer every broadly-morality-related question with that” are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)