I appear to be accidentally writing a sequence on moral realism, or at least explaining what moral realists like about moral realism—for those who are perplexed about why it would be worth wanting or how anyone could find it plausible.
Many philosophers outside this community have an instinct that normative anti-realism (about any irreducible facts about what you should do) is self-defeating, because it includes a denial that there are any final, buck-stopping answers to why we should believe something based on evidence, and therefore no truly, ultimately impartial way to even express the claim that you ought to believe something. I think that this is a good, but not perfect, argument. My experience has been that traditional analytic philosophers find this sort of reasoning appealing, in part because of the legacy of how Kant tried to deduce the logically necessary preconditions for having any kind of judgement or experience. I don’t find it particularly appealing, but I think that there’s a case for it here, if there ever was.
Irreducible Normativity and Recursive Justification
On normative antirealism, what ‘you shouldn’t believe that 2+2=5’ really means is just that someone else’s mind has different basic operations to yours. It is obvious that we can’t stop using normative concepts, and couldn’t use the concept ‘should’ to mean ‘in accordance with the basic operations of my mind’, but this isn’t an easy case of reduction like water = H2O. There is a deep sense in which normative terms really can’t mean what we think they mean if normative antirealism is true. This must be accounted for by either a deep and comprehensive question-dissolving, or by irreducible normative facts.
This ‘normative indispensability’ is not an argument, but it can be made into one:
1) On normative anti-realism there are no facts about which beliefs are justified. So there are no facts about whether normative anti-realism is justified. Therefore, normative anti-realism is self-defeating.
Except that doesn’t work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don’t question, which means that holding a belief without (the realist’s notion of) justification is consistent with anti-realism. So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
If you’ve read the sequences, you are not going to like this argument, at all—it sounds like the ‘zombie’ argument, and it sounds like someone asking for an exception to reductionism—which is just what it is. This is the alternative:
Where moral judgment is concerned, it’s logic all the way down. ALL the way down. Any frame of reference where you’re worried that it’s really no better to do what’s right than to maximize paperclips… well, that ‘really’ part has a truth-condition (or what does the “really” mean?) and as soon as you write out the truth-condition you’re going to end up with yet another ordering over actions or algorithms or meta-algorithms or something. And since grinding up the universe won’t and shouldn’t yield any miniature ‘>’ tokens, it must be a logical ordering. And so whatever logical ordering it is you’re worried about, it probably does produce ‘life > paperclips’ - but Clippy isn’t computing that logical fact any more than your pocket calculator is computing it.
Logical facts have no power to directly affect the universe except when some part of the universe is computing them, and morality is (and should be) logic, not physics.
If it’s truly ‘logic all the way down’ and there are no ‘> tokens’ over particular functional arrangements of matter, including the ones you used to form your beliefs, then you have to give up on knowing reality as it is. This isn’t the classic sense in which we all have an ‘imperfect model’ of reality as it is. If you give up on irreducible epistemic facts you give up knowing anything, probabilistically or otherwise, about reality-as-it-is, because there are no fundamentally, objectively, mind-independent ways you should or shouldn’t form beliefs about external reality. So you can’t say you’re better than the pebble with ‘2+2=5’ written on it, except descriptively, in that the causal process that produced the pebble contradicts the one that produced ‘2+2=4’ in your brain.
What’s the alternative? If we don’t deny this consequence of normative antirealism, we have two options. One is the route of dissolving the question, by analogy with how reductionism has worked in the past; the other is to say that there are irreducible normative facts. To dissolve the question correctly, it must be done in a way that shows a denial of epistemic facts isn’t damaging and doesn’t lead to epistemological relativism or scepticism. We can’t simply declare that normative facts can’t possibly exist—otherwise you’re vulnerable to argument 2). David Chalmers talks about question-dissolving for qualia:
You’ve also got to explain why we have these experiences. I guess Dennett’s line is to reject the idea that there are these first-person data, and to say that all you need to do is explain why you believe, and why you say, there are those things. Why do you believe there are those things? If you can explain that, then that’s good enough. It’s a line which Dennett has pursued inconsistently over the years, but insofar as that’s his line, I find it a fascinating and powerful line. I do find it ultimately unbelievable, because I just don’t think it explains the data, but it does, if developed properly, have the virtue that it could actually explain why people find it unbelievable, and that would be a point in its favor.
David Chalmers of all people says that, even if he can’t conceive of how a deep reduction of Qualia might make their non-existence non-paradoxical, he might change his mind if he ever actually saw such a reduction! I say the same about epistemic and therefore normative facts. But crucially, no-one has solved this ‘meta problem’ for Qualia or for normative facts. There are partial hints of explanations for both, but there’s no full debunking argument that makes epistemic antirealism seem completely non-damaging and thus removes 2). I can’t imagine what such an account could look like, but the point of the ‘dissolving the question’ strategy is that it often isn’t imaginable in advance because your concepts are confused, so I’ll just leave that point. In the moral domain, the convergence arguments point against question-dissolving because they suggest the concept of normativity is solid and reliable. If those arguments fall, then question-dissolving looks more likely.
That’s one route. What of the other?
The alternative is to say that there are irreducible normative facts. This is counter-reductionist, counter-intuitive and strange. Two things can make it less strange: these facts are not supposed to be intrinsically motivational (that violates the orthogonality thesis and is not permitted by the laws of physics), and they are not required to be facts about objects, like Platonic forms outside of time and space. They can be logical facts of the sort Eliezer talked about, just a particular kind of logical fact: one that has the property of being normative, of being the ordering you should follow. They don’t need to ‘exist’ as such. What epistemic facts would do is say that certain reflective equilibria, certain arrangements of ‘reflecting on your own beliefs, using your current mind’, are the right ones, and others are the wrong ones. It doesn’t deny that this is the case:
So what I did in practice, does not amount to declaring a sudden halt to questioning and justification. I’m not halting the chain of examination at the point that I encounter Occam’s Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?
Indeed, no matter what I did with this dilemma, it would be me doing it. Even if I trusted something else, like some computer program, it would be my own decision to trust it.
Irreducible normativity just says that there is a meaningful, mind-independent difference between the virtuous and degenerate cases of recursive justification of your beliefs, rather than just ways of recursively justifying our beliefs that are… different.
If you buy that anti-realism is self-defeating, and think that we can know something about the normative domain via moral and non-moral convergence, then we have actual positive reasons to believe that normative facts are knowable (the convergence arguments help establish that moral facts aren’t and couldn’t be random things like stacking pebbles in prime-numbered heaps).
These two arguments are quite different—one is empirical (that our practical, epistemic and moral reasons tend towards agreement over time and after conceptual analysis and reflective justification) and the other is conceptual (that if you start out with normative concepts you are forced into using them).
Depending on which of the arguments you accept, there are four basic options. These are extremes of a spectrum, as while the Normativity argument is all-or-nothing, the Convergence argument can come by degrees for different types of normative claims (epistemic, practical and moral):
• Accept Convergence and Reject Normativity: prescriptivist anti-realism. There are (probably) no mind-independent moral facts, but the nature of rationality is such that our values usually cohere and are stable, so we can treat morality as a more-or-less inflexible logical ordering over outcomes.
• Accept Convergence and Accept Normativity: moral realism. There are moral facts and we can know them.
• Reject Convergence and Reject Normativity: nihilist anti-realism. Morality is seen as a ‘personal life project’ about which we can’t expect much agreement or even within-person coherence.
• Reject Convergence and Accept Normativity: sceptical moral realism. Normative facts exist, but moral facts may not exist, or may be forever unknowable.
Even if what exactly normative facts are is hard to conceive, perhaps we can still know some things about them. Eliezer ended his post arguing for universalized, prescriptive anti-realism with a quote from HPMOR. Here’s a different quote:
“Sometimes,” Professor Quirrell said in a voice so quiet it almost wasn’t there, “when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been. I cannot seem to imagine what that place might be, and if I can’t even imagine it then how can I believe it exists? And yet the universe is so very, very wide, and perhaps it might exist anyway? …
An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post:
1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare and similar philosophers, HJPEV
2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers, Dumbledore
3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?), Lucas Gloor (?), most Error Theorists, Quirrell
4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists?
The difference in practical, normative terms between 2), 4) and 3) is clear enough − 2 is a moral realist in the classic sense, 4 is a sceptic about morality but agrees that irreducible normativity exists, and 3 is a classic ‘antirealist’ who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference.
Harry floundered for words and then decided to simply go with the obvious. “First of all, just because I want to hurt someone doesn’t mean it’s right—”
“What makes something right, if not your wanting it?”
“Ah,” Harry said, “preference utilitarianism.”
“Pardon me?” said Professor Quirrell.
“It’s the ethical theory that the good is what satisfies the preferences of the most people—”
“No,” Professor Quirrell said. His fingers rubbed the bridge of his nose. “I don’t think that’s quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn’t believe in irreducible normativity. In particular, he’s committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things.
Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the claim that our moral intuitions assign diminishing returns to happiness vs suffering.
While there are some people who argue for accepting the repugnant conclusion (Tännsjö, 2004), most people would probably prefer the smaller but happier civilization – at least under some circumstances. One explanation for this preference might lie in intuition one discussed above, “Making people happy rather than making happy people.” However, this is unlikely to be what is going on for everyone who prefers the smaller civilization: If there was a way to double the size of the smaller population while keeping the quality of life perfect, many people would likely consider this option both positive and important. This suggests that some people do care (intrinsically) about adding more lives and/or happiness to the world. But considering that they would not go for the larger civilization in the Repugnant Conclusion thought experiment above, it also seems that they implicitly place diminishing returns on additional happiness, i.e. that the bigger you go, the more making an overall happy population larger is no longer (that) important.
By contrast, people are much less likely to place diminishing returns on reducing suffering – at least insofar as the disvalue of extreme suffering, or the suffering in lives that on the whole do not seem worth living, is concerned. Most people would say that no matter the size of a (finite) population of suffering beings, adding more suffering beings would always remain equally bad.
It should be noted that incorporating diminishing returns to things of positive value into a normative theory is difficult to do in ways that do not seem unsatisfyingly arbitrary. However, perhaps the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled.
And what are those difficulties? The most obvious is the absurd conclusion—that scaling up a population can turn it from axiologically good to bad:
Hence, given the reasonable assumption that the negative value of adding extra lives with negative welfare does not decrease relatively to population size, a proportional expansion in the population size can turn a good population into a bad one—a version of the so-called “Absurd Conclusion” (Parfit 1984). A population of one million people enjoying very high positive welfare and one person with negative welfare seems intuitively to be a good population. However, since there is a limit to the positive value of positive welfare but no limit to the negative value of negative welfare, proportional expansions (two million lives with positive welfare and two lives with negative welfare, three million lives with positive welfare and three lives with negative welfare, and so forth) will in the end yield a bad population.
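The arithmetic here can be made concrete with a toy model (my own illustration, with arbitrary constants and functional form, not anyone's actual axiology): if the value of happy lives saturates toward a cap while the disvalue of suffering lives accumulates linearly, then a sufficiently large proportional expansion of a good population must come out bad.

```python
import math

def total_value(n_happy, n_suffering, cap=100.0, scale=1e6, neg_per_life=0.001):
    """Toy axiology: value of happy lives saturates at `cap` (diminishing
    returns to happiness); disvalue of suffering lives adds up linearly,
    without bound. All constants are arbitrary choices for illustration."""
    positive = cap * (1 - math.exp(-n_happy / scale))  # bounded above by cap
    negative = neg_per_life * n_suffering              # unbounded below
    return positive - negative

# Proportional expansions: a million happy lives per one suffering life.
# The small population comes out good; a large enough stack of exact
# proportional copies of it comes out bad (the 'absurd conclusion').
small = total_value(1_000_000, 1)
large = total_value(200_000 * 1_000_000, 200_000)
```

With these (made-up) constants, `small` is positive while `large` is negative, even though the large population is just 200,000 proportional copies of the good one.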
Here, then, is the difference. Suppose you believe, as a matter of fact, that our values cohere, and you place fundamental importance on coherence, whether because you think that is the way to get at the moral truth (2), or because you judge that human values do cohere to a large degree for whatever other reason and place fundamental value on coherence (1). Then you will not be satisfied with leaving your moral theory inconsistent. If, on the other hand, you see morality as continuous with your other life plans and goals (3), there is no pressure to be consistent. So for (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims. The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion—because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2), in which case you cannot arbitrarily halt it?
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims.
I agree with that.
The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion—because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2), in which case you cannot arbitrarily halt it?
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
I think a lot of progress in philosophy is inhibited because people use underdetermined categories like “ethics” without making the question more precise.
The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
This is very interesting—I recall from our earlier conversation that you said you might expect some areas of agreement, just not on axiology:
(I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)
I also agree with that, except that I think axiology is the one place where I’m most confident that there’s no convergence. :)
Maybe my anti-realism is best described as “some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined.”
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
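The anonymity analogy can be made concrete with a small sketch (purely illustrative; the setup and names are mine): a social ranking rule satisfies anonymity if relabelling which person holds which utility never changes its verdict, which is one formal way of cashing out impartiality.

```python
from itertools import permutations

def is_anonymous(swf, profile):
    """True iff the social welfare function gives the same verdict on
    every relabelling (permutation) of the individual utilities."""
    baseline = swf(tuple(profile))
    return all(swf(p) == baseline for p in permutations(profile))

utilitarian = lambda utils: sum(utils)   # impartial: insensitive to who is who
dictatorship = lambda utils: utils[0]    # partial: tracks one fixed person

profile = (3, 1, 2)
# Utilitarian summation passes the anonymity check; a dictatorship fails,
# because swapping the dictator with someone else changes the verdict.
```

The utilitarian rule is anonymous by construction (addition is commutative), which is one reason an impartiality constraint of this kind points toward broadly utilitarian aggregation.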
In my attempted classification (of whether you accept convergence and/or irreducible normativity), I think you’d be somewhere between 1 and 3. I did say that those views might be on a spectrum depending on which areas of Normativity overall you accept, but I didn’t consider splitting up ethics into specific subdomains, each of which might have convergence or not:
Depending on which of the arguments you accept, there are four basic options. These are extremes of a spectrum, as while the Normativity argument is all-or-nothing, the Convergence argument can come by degrees for different types of normative claims (epistemic, practical and moral)
Assuming that it is possible to cleanly separate population ethics from ‘preference utilitarianism’, it is consistent, though quite counterintuitive, to demand reflective coherence in our non-population ethical views but allow whatever we want in population ethics (this would be view 1 for most ethics but view 3 for population ethics).
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined.
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain undetermined, that is we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics undetermined?
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it’s true that if “life goals don’t converge” then “population ethics also doesn’t converge”)
However, I think there would likely be convergence on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain undetermined, that is we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics undetermined?
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future lightcone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn’t reaching a different conclusion on the same task. Instead, they’re doing a different task. I’m interested in all three questions I dissolved ethics into, whereas people who play the game “pick your version of consequentialism and answer every broadly-morality-related question with that” are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)
I’m here from your comment on Lukas’ post on the EA Forum. I haven’t been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It’s not obvious to me that it’s bad. Its rejection is also so close to separability/additivity that for someone who’s not sold on separability/additivity, an intuitive response is “Well ya, of course, so what?”. It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don’t.
So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
By deny, do you mean reject? Doesn’t negative utilitarianism work? Or do you mean incorrectly denying that the absurd conclusion doesn’t follow from diminishing returns to happiness vs suffering?
Also, for what it’s worth, my view is that a symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
If you’ve read the Sequences, you are not going to like this argument at all: it sounds like the zombie argument, and it sounds like someone asking for an exception to reductionism, which is just what it is. This is what the alternative looks like:
If it’s truly ‘logic all the way down’ and there are no ‘> tokens’ over particular functional arrangements of matter, including the ones you used to form your beliefs, then you have to give up on knowing reality as it is. This isn’t the classic sense in which we all have an ‘imperfect model’ of reality as it is. If you give up on irreducible epistemic facts, you give up on knowing anything, probabilistically or otherwise, about reality-as-it-is, because there are no fundamental, objective, mind-independent ways you should or shouldn’t form beliefs about external reality. So you can’t say you’re better than the pebble with ‘2+2=5’ written on it, except descriptively, in that the causal process that produced the pebble contradicts the one that produced ‘2+2=4’ in your brain.
What’s the alternative? If we don’t deny this consequence of normative antirealism, we have two options. One is the route of dissolving the question, by analogy with how reductionism has worked in the past; the other is to say that there are irreducible normative facts. To dissolve the question correctly, we need to show that a denial of epistemic facts isn’t damaging and doesn’t lead to epistemological relativism or scepticism. We can’t simply declare that normative facts can’t possibly exist; otherwise we’re vulnerable to argument 2). David Chalmers talks about question-dissolving for qualia:
David Chalmers of all people says that, even if he can’t conceive of how a deep reduction of qualia might make their non-existence non-paradoxical, he might change his mind if he ever actually saw such a reduction! I say the same about epistemic, and therefore normative, facts. But crucially, no one has solved this ‘meta-problem’ for qualia or for normative facts. There are partial hints of explanations for both, but there’s no full debunking argument that makes epistemic antirealism seem completely non-damaging and thus removes 2). I can’t imagine what such an account could look like, but the point of the ‘dissolving the question’ strategy is that it often isn’t imaginable in advance because your concepts are confused, so I’ll just leave that point. In the moral domain, the convergence arguments point against question-dissolving because they suggest the concept of normativity is solid and reliable. If those arguments fall, then question-dissolving looks more likely.
That’s one route. What of the other?
The alternative is to say that there are irreducible normative facts. This is counter-reductionist, counter-intuitive and strange. Two things can make it less strange: these facts are not supposed to be intrinsically motivational (that would violate the orthogonality thesis and is not permitted by the laws of physics), and they are not required to be facts about objects, like Platonic forms outside of time and space. They can be logical facts of the sort Eliezer talked about, just a particular kind of logical fact that has the property of being normative, of being the kind you should follow. They don’t need to ‘exist’ as such. What epistemic facts would do is say that certain reflective equilibria, certain arrangements of ‘reflecting on your own beliefs, using your current mind’, are the right ones, and others are the wrong ones. This doesn’t deny that the following is the case:
Irreducible normativity just says that there is a meaningful, mind-independent difference between the virtuous and degenerate cases of recursive justification of your beliefs, rather than just ways of recursively justifying our beliefs that are… different.
If you buy that anti-realism is self-defeating, and think that we can know something about the normative domain via moral and non-moral convergence, then we have actual positive reasons to believe that normative facts are knowable (the convergence arguments help establish that moral facts aren’t and couldn’t be random things like stacking pebbles in prime-numbered heaps).
These two arguments are quite different—one is empirical (that our practical, epistemic and moral reasons tend towards agreement over time and after conceptual analysis and reflective justification) and the other is conceptual (that if you start out with normative concepts you are forced into using them).
Depending on which of the arguments you accept, there are four basic options. These are the extremes of a spectrum: while the Normativity argument is all-or-nothing, the Convergence argument can come in degrees for different types of normative claims (epistemic, practical and moral):
• Accept Convergence and Reject Normativity: prescriptivist anti-realism. There are (probably) no mind-independent moral facts, but the nature of rationality is such that our values usually cohere and are stable, so we can treat morality as a more-or-less inflexible logical ordering over outcomes.
• Accept Convergence and Accept Normativity: moral realism. There are moral facts and we can know them.
• Reject Convergence and Reject Normativity: nihilist anti-realism. Morality is seen as a ‘personal life project’ about which we can’t expect much agreement or even within-person coherence.
• Reject Convergence and Accept Normativity: sceptical moral realism. Normative facts exist, but moral facts may not exist, or may be forever unknowable.
Even if it’s hard to conceive what exactly normative facts are, perhaps we can still know some things about them. Eliezer ended his post arguing for universalized, prescriptive anti-realism with a quote from HPMOR. Here’s a different quote:
Prescriptive Anti-realism
An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post:
1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare and similar philosophers, HJPEV
2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers, Dumbledore
3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?), Lucas Gloor (?), most Error Theorists, Quirrell
4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists?
The difference in practical, normative terms between 2), 4) and 3) is clear enough: 2 is a moral realist in the classic sense, 4 is a sceptic about morality who nonetheless agrees that irreducible normativity exists, and 3 is a classic ‘antirealist’ who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference.
The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn’t believe in irreducible normativity. In particular, he’s committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things.
Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the claim that our moral intuitions assign diminishing returns to happiness vs suffering.
And what are those difficulties mentioned? The most obvious is the absurd conclusion—that scaling up a population can turn it from axiologically good to bad:
Here, then, is the difference. If you believe that our values cohere and place fundamental importance on coherence, whether because you think that is the way to get at the moral truth (2) or because you judge that human values do cohere to a large degree for whatever other reason and place fundamental value on coherence (1), you will not be satisfied with leaving your moral theory inconsistent. If, on the other hand, you see morality as continuous with your other life plans and goals (3), then there is no pressure to be consistent. So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
I think that, on closer inspection, (3) is unstable: unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims. The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to posit a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further, to the absurd conclusion, because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence, as in (1)/(2)? And in that case you cannot arbitrarily halt it.
I agree with that.
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: you can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
• Preference utilitarianism as an underdetermined but universal morality
• “What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
• “What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
I think a lot of progress in philosophy is inhibited because people use underdetermined categories like “ethics” without making the question more precise.
This is very interesting—I recall from our earlier conversation that you said you might expect some areas of agreement, just not on axiology:
This may seem like an odd question, but, are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
Following Singer in The Expanding Circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur, the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
In my attempted classification (of whether you accept convergence and/or irreducible normativity), I think you’d be somewhere between 1 and 3. I did say that those views might be on a spectrum depending on which areas of Normativity overall you accept, but I didn’t consider splitting up ethics into specific subdomains, each of which might have convergence or not:
Assuming that it is possible to cleanly separate population ethics from ‘preference utilitarianism’, it is consistent, though quite counterintuitive, to demand reflective coherence in our non-population ethical views but allow whatever we want in population ethics (this would be view 1 for most ethics but view 3 for population ethics).
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Your case for SFE was intended to defend a view of population ethics: that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain underdetermined, that is, we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it’s true that if “life goals don’t converge” then “population ethics also doesn’t converge”)
However, I think there would likely be convergence on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments for why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future lightcone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn’t reaching a different conclusion on the same task. Instead, they’re doing a different task. I’m interested in all three of the questions I dissolved ethics into, whereas people who play the game “pick your version of consequentialism and answer every broadly-morality-related question with that” are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)
I’m here from your comment on Lukas’ post on the EA Forum. I haven’t been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It’s not obvious to me that it’s bad. Its rejection is also so close to separability/additivity that for someone who’s not sold on separability/additivity, an intuitive response is “Well ya, of course, so what?”. It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don’t.
By deny, do you mean reject? Doesn’t negative utilitarianism work? Or do you mean incorrectly claiming that the absurd conclusion doesn’t follow from diminishing returns to happiness vs suffering?
Also, for what it’s worth, my view is that a symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
[1]
[2]
[3]