I think that it is a more damaging mistake to think moral antirealism is true when realism is true than vice versa, but I agree with you that the difference is nowhere near infinite, and doesn’t give you a strong wager. However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
Epistemic anti-realism
Cool, I’m happy that this argument appeals to a moral realist! ...
...I don’t think this argument (“anti-realism is self-defeating”) works well in this context. If anti-realism is just the claim “the rocks or free-floating mountain slopes that we’re seeing don’t connect to form a full mountain,” I don’t see what’s self-defeating about that...
To summarize: There’s no infinitely strong wager for moral realism.
I agree that there is no infinitely strong wager for moral realism. As soon as moral realists start making empirical claims about the consequences of realism (that convergence is likely), you can’t say that moral realism is true necessarily or that there is an infinitely strong prior in favour of it. An AI that knows that your idealised preferences don’t cohere could always show up and prove you wrong, just as you say. If I were Bob in this dialogue, I’d happily concede that moral anti-realism is true. If (supposing it were the case) there were not much consensus on anything to do with morality (“The rocks don’t connect...”), someone who pointed that out and said ‘from that I infer that moral realism is unlikely’ wouldn’t be saying anything self-defeating. Moral anti-realism is not self-defeating, either on its own terms or on the terms of a ‘mixed view’ like I describe here:
We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms, the other in which there are mind-independent facts about which of our beliefs are justified...
However, I do think that there is an infinitely strong wager in favour of normative realism, and that normative anti-realism is self-defeating on the terms of a ‘mixed view’ that starts out considering the two alternatives given above. This wager arises because of the subset of normative facts that are epistemic facts. The example that I used was about ‘how beliefs are justified’. Maybe I wasn’t clear, but I was referring to beliefs in general, not to beliefs about morality. Epistemic facts, e.g. that you should believe something if there is a sufficient amount of evidence, are a kind of normative fact. You noted them on your list here. So, the infinite wager argument goes like this -
1) On normative anti-realism there are no facts about which beliefs are justified. So there are no facts about whether normative anti-realism is justified. Therefore, normative anti-realism is self-defeating.
Except that doesn’t work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don’t question, which means that holding a belief without (the realist’s notion of) justification is consistent with anti-realism. So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
Evidence for epistemic facts?
I find it interesting that the imagined scenario you give in #5 essentially skips over argument 2) as something that is impossible to judge:
AI: Only in a sense I don’t endorse as such! We’ve gone full circle. I take it that you believe that just like there might be irreducibly normative facts about how to do good, the same goes for irreducible normative facts about how to reason?
Bob: Indeed, that has always been my view.
AI: Of course, that concept is just as incomprehensible to me.
The AI doesn’t give evidence against there being irreducible normative facts about how to reason, it just states it finds the concept incoherent, unlike the (hypothetical) evidence that the AI piles on against moral realism (for example, that people’s moral preferences don’t cohere).

Either you think some basic epistemic facts have to exist for reasoning to get off the ground, and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don’t care about the realist’s sense of ‘self-defeating’. The AI is in the latter camp, but not because of evidence, the way that it’s a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it’s constructed in such a way that it lacks the concept of an epistemic reason.

So, if this AI is constructed such that irreducibly normative facts about how to reason aren’t comprehensible to it, it only has access to argument 1), which doesn’t work. It can’t imagine 2). However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something but aren’t sure if it applies—and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn’t be justified. However, this doesn’t establish moral realism—as you said earlier, moral anti-realism is not self-defeating.
If anti-realism is just the claim “the rocks or free-floating mountain slopes that we’re seeing don’t connect to form a full mountain,” I don’t see what’s self-defeating about that
Combining convergence arguments and the infinite wager
If you want to argue for moral realism, then you need evidence for moral realism, which comes in the form of convergence arguments. But the above argument is still relevant, because the convergence and ‘infinite wager’ arguments support each other. The reason 2) would be bolstered by the success of convergence arguments (in epistemology, or ethics, or any other normative domain) is that convergence arguments increase our confidence that normativity is a coherent concept—which is what 2) needs to work. It certainly seems coherent to me, but this cannot be taken as self-evident, since various people have claimed that they or others don’t have the concept. I also think that 2) is some evidence in favour of moral realism, because it undermines some of the strongest antirealist arguments.
By contrast, for versions of normativity that depend on claims about a normative domain’s structure, the partners-in-crime arguments don’t even apply. After all, just because philosophers might—hypothetically, under idealized circumstances—agree on the answers to all (e.g.) decision-theoretic questions doesn’t mean that they would automatically also find agreement on moral questions.[29] On this interpretation of realism, all domains have to be evaluated separately
I don’t think this is right. What I’m giving here is such a ‘partners-in-crime’ argument with a structure, with epistemic facts at the base. Realism about normativity certainly should lower the burden of proof on moral realism to demonstrate total convergence, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the ‘queerness argument’: that normative facts are incoherent or too strange to be allowed into our ontology. The ‘partners-in-crime’/‘infinite wager’ argument undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough—depending on the details.
I agree that it then shifts the arena to convergence arguments. I will discuss them in posts 6 and 7.
So, with all that out of the way, when we start discussing the convergence arguments, the burden of proof on them is not colossal. If we already have reason to suspect that there are normative facts out there, perhaps some of them are moral facts. But if we found a random morass of different considerations under the name ‘morality’, then we’d be stuck concluding that there might be some normative facts, but maybe they are only epistemic facts, with nothing else in the domain of normativity. I don’t think this is the case, but I will have to wait until your posts on that topic—I look forward to them! All I’ll say is that I don’t consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement. (I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core—that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.) If Kant could have been a utilitarian and never realised it, then those who are appalled by the repugnant conclusion could certainly converge to accept it after enough ideal reflection!
Belief in God, or in many gods, prevented the free development of moral reasoning. Disbelief in God, openly admitted by a majority, is a recent event, not yet completed. Because this event is so recent, Non-Religious Ethics is at a very early stage. We cannot yet predict whether, as in Mathematics, we will all reach agreement. Since we cannot know how Ethics will develop, it is not irrational to have high hopes.
How to make anti-realism existentially satisfying
Instead of “utilitarianism as the One True Theory,” we consider it as “utilitarianism as a personal, morally-inspired life goal...”
While this concession is undoubtedly frustrating, proclaiming others to be objectively wrong rarely accomplished anything anyway. It’s not as though moral disagreements—or disagreements in people’s life choices—would go away if we adopted moral realism.
If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing. I think that framing morality as a ‘personal life goal’ makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism. Speaking as someone inclined towards moral realism, the most inspiring presentations I’ve ever seen of anti-realism are those given by Peter Singer in The Expanding Circle and Eliezer Yudkowsky in his metaethics sequence. Probably not by coincidence—both of these people are inclined to be realists. Eliezer said as much, and Singer later became a realist after reading Parfit. Eliezer Yudkowsky on ‘The Meaning of Right’:
The apparent objectivity of morality has just been explained—and not explained away. For indeed, if someone slipped me a pill that made me want to kill people, nonetheless, it would not be right to kill people. Perhaps I would actually kill people, in that situation—but that is because something other than morality would be controlling my actions.
Morality is not just subjunctively objective, but subjectively objective. I experience it as something I cannot change. Even after I know that it’s myself who computes this 1-place function, and not a rock somewhere—even after I know that I will not find any star or mountain that computes this function, that only upon me is it written—even so, I find that I wish to save lives, and that even if I could change this by an act of will, I would not choose to do so. I do not wish to reject joy, or beauty, or freedom. What else would I do instead? I do not wish to reject the Gift that natural selection accidentally barfed into me.
And Singer in the Expanding Circle:
“Whether particular people with the capacity to take an objective point of view actually do take this objective viewpoint into account when they act will depend on the strength of their desire to avoid inconsistency between the way they reason publicly and the way they act.”
These are both anti-realist claims. They define ‘right’ descriptively and procedurally, as arising from what we would want to do under some ideal circumstances, and they rigidify on the output of that idealization, not on what we currently want. To a realist, this is far more appealing than a mere “personal, morally-inspired life goal”, and has the character of ‘external moral constraint’, even if it’s not really ultimately external, but just the result of immovable or basic facts about how your mind will, in fact, work, including facts about how your mind finds inconsistencies in its own beliefs. This is a feature, not a bug:
According to utilitarianism, what people ought to spend their time on depends not only on what they care about but also on how they can use their abilities to do the most good. What people most want to do only factors into the equation in the form of motivational constraints, constraints about which self-concepts or ambitious career paths would be long-term sustainable. Williams argues that this utilitarian thought process alienates people from their actions since it makes it no longer the case that actions flow from the projects and attitudes with which these people most strongly identify...
The exact thing that Williams calls ‘alienating’ is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this ‘alienation’ if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you’d reframe epistemic or practical reasoning on the anti-realist view. Then it seems more ‘external’ and less relativistic. One thing this framing makes clearer, which you don’t deny but don’t mention, is that anti-realism does not imply relativism.
In that case, normative discussions can remain fruitful. Unfortunately, this won’t work in all instances. There will be cases where no matter how outrageous we find someone’s choices, we cannot say that they are committing an error of reasoning.
What we can say, on anti-realism as characterised by Singer and Yudkowsky, is that they are making an error of morality. We are not obligated (how could we be?) towards relativism, permissiveness or accepting values incompatible with our own on anti-realism. Ultimately, you can just say ‘I am right and you are wrong’. That’s one of the major upsides of anti-realism to the realist—you still get to make universal, prescriptive claims and follow them through because they are morally right, and if people disagree with you then they are morally wrong, and you aren’t obligated to listen to their arguments if they arise from fundamentally incompatible values. Put that way, anti-realism is much more appealing to someone with realist inclinations.
I appear to be accidentally writing a sequence on moral realism, or at least explaining what moral realists like about moral realism—for those who are perplexed about why it would be worth wanting or how anyone could find it plausible.
Many philosophers outside this community have an instinct that normative anti-realism (about any irreducible facts about what you should do) is self-defeating, because it includes a denial that there are any final, buck-stopping answers to why we should believe something based on evidence, and therefore no truly, ultimately impartial way to even express the claim that you ought to believe something. I think that this is a good, but not perfect, argument. My experience has been that traditional analytic philosophers find this sort of reasoning appealing, in part because of the legacy of how Kant tried to deduce the logically necessary preconditions for having any kind of judgement or experience. I don’t find it particularly appealing, but I think that there’s a case for it here, if there ever was.
Irreducible Normativity and Recursive Justification
On normative antirealism, what ‘you shouldn’t believe that 2+2=5’ really means is just that someone else’s mind has different basic operations from yours. It is obvious that we can’t stop using normative concepts, and we couldn’t use the concept ‘should’ to mean ‘in accordance with the basic operations of my mind’—but this isn’t an easy case of reduction like Water=H2O. There is a deep sense in which normative terms really can’t mean what we think they mean if normative antirealism is true. This must be accounted for either by a deep and comprehensive question-dissolving, or by irreducible normative facts.
This ‘normative indispensability’ is not an argument, but it can be made into one:
1) On normative anti-realism there are no facts about which beliefs are justified. So there are no facts about whether normative anti-realism is justified. Therefore, normative anti-realism is self-defeating.
Except that doesn’t work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don’t question, which means that holding a belief without (the realist’s notion of) justification is consistent with anti-realism. So the wager argument for normative realism actually goes like this -
2) We have two competing ways of understanding how beliefs are justified. One is where we have anti-realist ‘justification’ for our beliefs, in purely descriptive terms of what we will probably end up believing given basic facts about how our minds work in some idealised situation. The other is where there are mind-independent facts about which of our beliefs are justified. The latter is more plausible because of 1).
If you’ve read the sequences, you are not going to like this argument, at all—it sounds like the ‘zombie’ argument, and it sounds like someone asking for an exception to reductionism—which is just what it is. This is the alternative:
Where moral judgment is concerned, it’s logic all the way down. ALL the way down. Any frame of reference where you’re worried that it’s really no better to do what’s right than to maximize paperclips… well, that ‘really’ part has a truth-condition (or what does the “really” mean?) and as soon as you write out the truth-condition you’re going to end up with yet another ordering over actions or algorithms or meta-algorithms or something. And since grinding up the universe won’t and shouldn’t yield any miniature ‘>’ tokens, it must be a logical ordering. And so whatever logical ordering it is you’re worried about, it probably does produce ‘life > paperclips’ - but Clippy isn’t computing that logical fact any more than your pocket calculator is computing it.
Logical facts have no power to directly affect the universe except when some part of the universe is computing them, and morality is (and should be) logic, not physics.
If it’s truly ‘logic all the way down’ and there are no ‘> tokens’ over particular functional arrangements of matter, including the ones you used to form your beliefs, then you have to give up on knowing reality as it is. This isn’t the classic sense in which we all have an ‘imperfect model’ of reality as it is. If you give up on irreducible epistemic facts you give up knowing anything, probabilistically or otherwise, about reality-as-it-is, because there are no fundamentally, objectively, mind-independent ways you should or shouldn’t form beliefs about external reality. So you can’t say you’re better than the pebble with ‘2+2=5’ written on it, except descriptively, in that the causal process that produced the pebble contradicts the one that produced ‘2+2=4’ in your brain.
What’s the alternative? If we want to avoid this consequence of normative antirealism, we have two options. One is the route of dissolving the question, by analogy with how reductionism has worked in the past; the other is to say that there are irreducible normative facts. In order to dissolve the question correctly, it needs to be done in a way that shows a denial of epistemic facts isn’t damaging, and doesn’t lead to epistemological relativism or scepticism. We can’t simply declare that normative facts can’t possibly exist—otherwise we’re vulnerable to argument 2). David Chalmers talks about question-dissolving for qualia:
You’ve also got to explain why we have these experiences. I guess Dennett’s line is to reject the idea that there are these first-person data, and to say all you have to do is explain why you believe there are those things, why you say there are those things. Why do you believe there are those things? Then that’s good enough. It’s a line Dennett has pursued inconsistently over the years, but insofar as that’s his line, I find it a fascinating and powerful line. I do find it ultimately unbelievable, because I just don’t think it explains the data, but it does, if developed properly, have the virtue that it could actually explain why people find it unbelievable, and that would be a virtue in its favor.
David Chalmers of all people says that, even if he can’t conceive of how a deep reduction of Qualia might make their non-existence non-paradoxical, he might change his mind if he ever actually saw such a reduction! I say the same about epistemic and therefore normative facts. But crucially, no-one has solved this ‘meta problem’ for Qualia or for normative facts. There are partial hints of explanations for both, but there’s no full debunking argument that makes epistemic antirealism seem completely non-damaging and thus removes 2). I can’t imagine what such an account could look like, but the point of the ‘dissolving the question’ strategy is that it often isn’t imaginable in advance because your concepts are confused, so I’ll just leave that point. In the moral domain, the convergence arguments point against question-dissolving because they suggest the concept of normativity is solid and reliable. If those arguments fall, then question-dissolving looks more likely.
That’s one route. What of the other?
The alternative is to say that there are irreducible normative facts. This is counter-reductionist, counter-intuitive and strange. Two things can make it less strange: these facts are not supposed to be intrinsically motivational (that would violate the orthogonality thesis and is not permitted by the laws of physics), and they are not required to be facts about objects, like Platonic forms outside of time and space. They can be logical facts of the sort Eliezer talked about, but a particular kind of logical fact that has the property of being normative—the kind you should follow. They don’t need to ‘exist’ as such. What epistemic facts would do is say that certain reflective equilibria—certain ways of ‘reflecting on your own beliefs, using your current mind’—are the right ones, and others are the wrong ones. It doesn’t deny that this is the case:
So what I did in practice, does not amount to declaring a sudden halt to questioning and justification. I’m not halting the chain of examination at the point that I encounter Occam’s Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?
Indeed, no matter what I did with this dilemma, it would be me doing it. Even if I trusted something else, like some computer program, it would be my own decision to trust it.
Irreducible normativity just says that there is a meaningful, mind-independent difference between the virtuous and degenerate cases of recursive justification of your beliefs, rather than just ways of recursively justifying our beliefs that are… different.
If you buy that anti-realism is self-defeating, and think that we can know something about the normative domain via moral and non-moral convergence, then you have actual positive reasons to believe that normative facts are knowable (the convergence arguments help establish that moral facts aren’t and couldn’t be random things like stacking pebbles in prime-numbered heaps).
These two arguments are quite different—one is empirical (that our practical, epistemic and moral reasons tend towards agreement over time and after conceptual analysis and reflective justification) and the other is conceptual (that if you start out with normative concepts you are forced into using them).
Depending on which of the arguments you accept, there are four basic options. These are extremes of a spectrum: while the Normativity argument is all-or-nothing, the Convergence argument can come in degrees for different types of normative claims (epistemic, practical and moral):
• Accept Convergence and Reject Normativity: prescriptivist anti-realism. There are (probably) no mind-independent moral facts, but the nature of rationality is such that our values usually cohere and are stable, so we can treat morality as a more-or-less inflexible logical ordering over outcomes.
• Accept Convergence and Accept Normativity: moral realism. There are moral facts and we can know them.
• Reject Convergence and Reject Normativity: nihilist anti-realism. Morality is seen as a ‘personal life project’ about which we can’t expect much agreement or even within-person coherence.
• Reject Convergence and Accept Normativity: sceptical moral realism. Normative facts exist, but moral facts may not exist, or may be forever unknowable.
Even if it is hard to conceive what exactly normative facts are, perhaps we can still know some things about them. Eliezer ended his post arguing for universalized, prescriptive anti-realism with a quote from HPMOR. Here’s a different quote:
“Sometimes,” Professor Quirrell said in a voice so quiet it almost wasn’t there, “when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been. I cannot seem to imagine what that place might be, and if I can’t even imagine it then how can I believe it exists? And yet the universe is so very, very wide, and perhaps it might exist anyway? …
An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post:
1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare and similar philosophers, HJPEV
2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers, Dumbledore
3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?), Lucas Gloor (?), most Error Theorists, Quirrell
4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists?
The difference in practical, normative terms between 2), 4) and 3) is clear enough—2 is a moral realist in the classic sense, 4 is a sceptic about morality but agrees that irreducible normativity exists, and 3 is a classic ‘antirealist’ who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference.
Harry floundered for words and then decided to simply go with the obvious. “First of all, just because I want to hurt someone doesn’t mean it’s right—”
“What makes something right, if not your wanting it?”
“Ah,” Harry said, “preference utilitarianism.”
“Pardon me?” said Professor Quirrell.
“It’s the ethical theory that the good is what satisfies the preferences of the most people—”
“No,” Professor Quirrell said. His fingers rubbed the bridge of his nose. “I don’t think that’s quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like ‘right’ to things they want to do, but how could we possibly act on anything but our own desires?”
The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn’t believe in irreducible normativity. In particular, he’s committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things.
Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the claim that our moral intuitions assign diminishing returns to happiness but not to suffering.
While there are some people who argue for accepting the repugnant conclusion (Tännsjö, 2004), most people would probably prefer the smaller but happier civilization – at least under some circumstances. One explanation for this preference might lie in intuition one discussed above, “Making people happy rather than making happy people.” However, this is unlikely to be what is going on for everyone who prefers the smaller civilization: If there was a way to double the size of the smaller population while keeping the quality of life perfect, many people would likely consider this option both positive and important. This suggests that some people do care (intrinsically) about adding more lives and/or happiness to the world. But considering that they would not go for the larger civilization in the Repugnant Conclusion thought experiment above, it also seems that they implicitly place diminishing returns on additional happiness, i.e. that the bigger you go, the more making an overall happy population larger is no longer (that) important.
By contrast, people are much less likely to place diminishing returns on reducing suffering – at least17 insofar as the disvalue of extreme suffering, or the suffering in lives that on the whole do not seem worth living, is concerned. Most people would say that no matter the size of a (finite) population of suffering beings, adding more suffering beings would always remain equally bad.
It should be noted that incorporating diminishing returns to things of positive value into a normative theory is difficult to do in ways that do not seem unsatisfyingly arbitrary. However, perhaps the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled.
And what are those difficulties mentioned? The most obvious is the absurd conclusion—that scaling up a population can turn it from axiologically good to bad:
Hence, given the reasonable assumption that the negative value of adding extra lives with negative welfare does not decrease relatively to population size, a proportional expansion in the population size can turn a good population into a bad one—a version of the so-called “Absurd Conclusion” (Parfit 1984). A population of one million people enjoying very high positive welfare and one person with negative welfare seems intuitively to be a good population. However, since there is a limit to the positive value of positive welfare but no limit to the negative value of negative welfare, proportional expansions (two million lives with positive welfare and two lives with negative welfare, three million lives with positive welfare and three lives with negative welfare, and so forth) will in the end yield a bad population.
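The arithmetic behind the absurd conclusion can be made concrete with a toy model (this is my own illustration, not from the quoted text, and the value function and constants are arbitrary assumptions): if total positive value saturates at some cap while negative value accumulates linearly without bound, then any fixed ratio of happy to suffering lives must eventually yield a negative total under proportional expansion.

```python
import math

# Toy model (illustrative assumptions only): positive welfare aggregates
# with diminishing returns, saturating at POS_CAP, while negative welfare
# aggregates linearly with no bound, matching the quoted assumption.
POS_CAP = 100.0          # assumed ceiling on total positive value
SUFFERING_COST = 0.002   # assumed disvalue per life with negative welfare

def total_value(happy: int, suffering: int) -> float:
    positive = POS_CAP * (1 - math.exp(-happy / 1_000_000))  # bounded above
    negative = SUFFERING_COST * suffering                    # unbounded below
    return positive - negative

# A million happy lives and one suffering life: intuitively a good population.
print(total_value(1_000_000, 1) > 0)              # True

# The same ratio expanded 100,000-fold: the capped positive value is
# eventually overtaken by the linearly growing negative value.
print(total_value(100_000_000_000, 100_000) > 0)  # False
```

Nothing hinges on the particular saturating curve; any bounded positive aggregation paired with unbounded negative aggregation produces the same crossover.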
Here, then, is the difference. If you believe, as a matter of fact, that our values cohere and place fundamental importance on coherence, whether because you think that is the way to get at the moral truth (2) or because you judge that human values do cohere to a large degree for whatever other reason and place fundamental value on coherence (1), you will not be satisfied with leaving your moral theory inconsistent. If, on the other hand, you see morality as continuous with your other life plans and goals (3), then there is no pressure to be consistent. So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims. The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude that there is a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion, because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2), in which case you cannot arbitrarily halt it?
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, we want to make some universal moral claims.
I agree with that.
The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude that there is a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion, because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2), in which case you cannot arbitrarily halt it?
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
I think a lot of progress in philosophy is inhibited because people use underdetermined categories like “ethics” without making the question more precise.
The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
This is very interesting—I recall from our earlier conversation that you said you might expect some areas of agreement, just not on axiology:
(I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core - that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.)
I also agree with that, except that I think axiology is the one place where I’m most confident that there’s no convergence. :)
Maybe my anti-realism is best described as “some moral facts exist (in a weak sense as far as other realist proposals go), but morality is underdetermined.”
This may seem like an odd question, but: are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
In my attempted classification (of whether you accept convergence and/or irreducible normativity), I think you’d be somewhere between 1 and 3. I did say that those views might be on a spectrum depending on which areas of Normativity overall you accept, but I didn’t consider splitting up ethics into specific subdomains, each of which might have convergence or not:
Depending on which of the arguments you accept, there are four basic options. These are extremes of a spectrum, as while the Normativity argument is all-or-nothing, the Convergence argument can come by degrees for different types of normative claims (epistemic, practical and moral)
Assuming that it is possible to cleanly separate population ethics from ‘preference utilitarianism’, it is consistent, though quite counterintuitive, to demand reflective coherence in our non-population ethical views but allow whatever we want in population ethics (this would be view 1 for most ethics but view 3 for population ethics).
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: You can have some universal moral principles, but leave a lot of population ethics underdetermined.
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain underdetermined, that is, we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
This may seem like an odd question, but: are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it’s true that if “life goals don’t converge” then “population ethics also doesn’t converge”)
However, I think there would likely be convergence on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Following Singer in the expanding circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain underdetermined, that is, we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future lightcone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn’t reaching a different conclusion on the same task. Instead, they’re doing a different task. I’m interested in all three of the questions I dissolved ethics into, whereas people who play the game “pick your version of consequentialism and answer every broadly-morality-related question with that” are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)
I’m here from your comment on Lukas’ post on the EA Forum. I haven’t been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It’s not obvious to me that it’s bad. Its rejection is also so close to separability/additivity that for someone who’s not sold on separability/additivity, an intuitive response is “Well ya, of course, so what?”. It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don’t.
So to (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
By deny, do you mean reject? Doesn’t negative utilitarianism work? Or do you mean incorrectly denying that the absurd conclusion follows from diminishing returns to happiness vs suffering?
Also, for what it’s worth, my view is that a symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
I got into a discussion with Lucas Gloor on the EA forum about these issues. I’m copying some of what I wrote here as it’s a continuation of that.
I think that it is a more damaging mistake to think moral antirealism is true when realism is true than vice versa, but I agree with you that the difference is nowhere near infinite, and doesn’t give you a strong wager.

However, I do think that normative anti-realism is self-defeating, assuming you start out with normative concepts (though not an assumption that those concepts apply to anything). I consider this argument to be step 1 in establishing moral realism, nowhere near the whole argument.
Epistemic anti-realism
I agree that there is no infinitely strong wager for moral realism. As soon as moral realists start making empirical claims about the consequences of realism (that convergence is likely), you can’t say that moral realism is true necessarily or that there is an infinitely strong prior in favour of it. An AI that knows that your idealised preferences don’t cohere could always show up and prove you wrong, just as you say. If I were Bob in this dialogue, I’d happily concede that moral anti-realism is true.

If (supposing it were the case) there were not much consensus on anything to do with morality (“The rocks don’t connect...”), someone who pointed that out and said ‘from that I infer that moral realism is unlikely’ wouldn’t be saying anything self-defeating. Moral anti-realism is not self-defeating, either on its own terms or on the terms of a ‘mixed view’ like I describe here:
However, I do think that there is an infinitely strong wager in favour of normative realism and that normative anti-realism is self-defeating on the terms of a ‘mixed view’ that starts out considering the two alternatives like that given above. This wager is because of the subset of normative facts that are epistemic facts.

The example that I used was about ‘how beliefs are justified’. Maybe I wasn’t clear, but I was referring to beliefs in general, not to beliefs about morality. Epistemic facts, e.g. that you should believe something if there is a sufficient amount of evidence, are a kind of normative fact. You noted them on your list here.

So, the infinite wager argument goes like this:
Except that doesn’t work! Because on normative anti-realism, the whole idea of external facts about which beliefs are justified is mistaken, and instead we all just have fundamental principles (whether moral or epistemic) that we use but don’t question, which means that holding a belief without (the realist’s notion of) justification is consistent with anti-realism.

So the wager argument for normative realism actually goes like this:
Evidence for epistemic facts?
I find it interesting that the imagined scenario you give in #5 essentially skips over argument 2) as something that is impossible to judge:
The AI doesn’t give evidence against there being irreducible normative facts about how to reason, it just states it finds the concept incoherent, unlike the (hypothetical) evidence that the AI piles on against moral realism (for example, that people’s moral preferences don’t cohere).

Either you think some basic epistemic facts have to exist for reasoning to get off the ground and therefore that epistemic anti-realism is self-defeating, or you are an epistemic anti-realist and don’t care about the realist’s sense of ‘self-defeating’. The AI is in the latter camp, but not because of evidence, the way that it’s a moral anti-realist (...However, you haven’t established that all normative statements work the same way—that was just an intuition...), but just because it’s constructed in such a way that it lacks the concept of an epistemic reason.

So, if this AI is constructed such that irreducibly normative facts about how to reason aren’t comprehensible to it, it only has access to argument 1), which doesn’t work. It can’t imagine 2).

However, I think that we humans are in a situation where 2) is open to consideration, where we have the concept of a reason for believing something, but aren’t sure if it applies—and if we are in that situation, I think we are dragged towards thinking that it must apply, because otherwise our beliefs wouldn’t be justified.

However, this doesn’t establish moral realism—as you said earlier, moral anti-realism is not self-defeating.
Combining convergence arguments and the infinite wager
If you want to argue for moral realism, then you need evidence for moral realism, which comes in the form of convergence arguments. But the above argument is still relevant, because the convergence and ‘infinite wager’ arguments support each other. The reason 2) would be bolstered by the success of convergence arguments (in epistemology, or ethics, or any other normative domain) is that convergence arguments increase our confidence that normativity is a coherent concept—which is what 2) needs to work. It certainly seems coherent to me, but this cannot be taken as self-evident since various people have claimed that they or others don’t have the concept.

I also think that 2) is some evidence in favour of moral realism, because it undermines some of the strongest antirealist arguments.
I don’t think this is right. What I’m giving here is just such a ‘partners-in-crime’ argument, with epistemic facts at the base. Realism about normativity should lower the burden of proof on moral realism, so that it need not prove total convergence, because we already have reason to believe normative facts exist. For most anti-realists, the very strongest argument is the ‘queerness argument’: that normative facts are incoherent or too strange to be allowed into our ontology. The ‘partners-in-crime’/‘infinite wager’ argument undermines this strong argument against moral realism. So some sort of very strong hint of a convergence structure might be good enough—depending on the details.
So, with all that out of the way, when we start discussing the convergence arguments, the burden of proof on them is not colossal. If we already have reason to suspect that there are normative facts out there, perhaps some of them are moral facts. But if we found a random morass of different considerations under the name ‘morality’ then we’d be stuck concluding that there might be some normative facts, but maybe they are only epistemic facts, with nothing else in the domain of normativity.

I don’t think this is the case, but I will have to wait until your posts on that topic—I look forward to them!

All I’ll say is that I don’t consider strongly conflicting intuitions in e.g. population ethics to be persuasive reasons for thinking that convergence will not occur. As long as the direction of travel is consistent, and we can mention many positive examples of convergence, the preponderance of evidence is that there are elements of our morality that reach high-level agreement. (I say elements because realism is not all-or-nothing—there could be an objective ‘core’ to ethics, maybe axiology, and much ethics could be built on top of such a realist core—that even seems like the most natural reading of the evidence, if the evidence is that there is convergence only on a limited subset of questions.) If Kant could have been a utilitarian and never realised it, then those who are appalled by the repugnant conclusion could certainly converge to accept it after enough ideal reflection!
How to make anti-realism existentially satisfying
If your goal here is to convince those inclined towards moral realism to see anti-realism as existentially satisfying, I would recommend a different framing of it. I think that framing morality as a ‘personal life goal’ makes it seem as though it is much more a matter of choice or debate than it in fact is, and will probably ring alarm bells in the mind of a realist and make them think of moral relativism.

Speaking as someone inclined towards moral realism, the most inspiring presentations I’ve ever seen of anti-realism are those given by Peter Singer in The Expanding Circle and Eliezer Yudkowsky in his metaethics sequence. Probably not by coincidence—both of these people are inclined to be realists. Eliezer said as much, and Singer later became a realist after reading Parfit. Eliezer Yudkowsky on ‘The Meaning of Right’:
And Singer in the Expanding Circle:
These are both anti-realist claims. They define ‘right’ descriptively and procedurally as arising from what we would want to do under some ideal circumstances, and rigidify on the output of that idealization, not on what we want. To a realist, this is far more appealing than a mere “personal, morally-inspired life goal”, and has the character of ‘external moral constraint’, even if it’s not really ultimately external, but just the result of immovable or basic facts about how your mind will, in fact, work, including facts about how your mind finds inconsistencies in its own beliefs. This is a feature, not a bug:
The exact thing that Williams calls ‘alienating’ is the thing that Singer, Yudkowsky, Parfit and many other realists and anti-realists consider to be the most valuable thing about morality! But you can keep this ‘alienation’ if you reframe morality as being the result of the basic, deterministic operations of your moral reasoning, the same way you’d reframe epistemic or practical reasoning on the anti-realist view. Then it seems more ‘external’ and less relativistic.

One thing this framing makes clearer, which you don’t deny but don’t mention, is that anti-realism does not imply relativism.
What we can say, on anti-realism as characterised by Singer and Yudkowsky, is that they are making an error of morality. We are not obligated (how could we be?) towards relativism, permissiveness or accepting values incompatible with our own on anti-realism. Ultimately, you can just say that ‘I am right and you are wrong’.

That’s one of the major upsides of anti-realism to the realist—you still get to make universal, prescriptive claims and follow them through, and follow them through because they are morally right, and if people disagree with you then they are morally wrong and you aren’t obligated to listen to their arguments if they arise from fundamentally incompatible values. Put that way, anti-realism is much more appealing to someone with realist inclinations.
I appear to be accidentally writing a sequence on moral realism, or at least explaining what moral realists like about moral realism—for those who are perplexed about why it would be worth wanting or how anyone could find it plausible.
Many philosophers outside this community have an instinct that normative anti-realism (about any irreducible facts about what you should do) is self-defeating, because it includes a denial that there are any final, buck-stopping answers to why we should believe something based on evidence, and therefore no truly, ultimately impartial way to even express the claim that you ought to believe something. I think that this is a good, but not perfect, argument. My experience has been that traditional analytic philosophers find this sort of reasoning appealing, in part because of the legacy of how Kant tried to deduce the logically necessary preconditions for having any kind of judgement or experience. I don’t find it particularly appealing, but I think that there’s a case for it here, if there ever was.
Irreducible Normativity and Recursive Justification
On normative antirealism, what ‘you shouldn’t believe that 2+2=5’ really means is just that someone else’s mind has different basic operations to yours. It is obvious that we can’t stop using normative concepts, and couldn’t use the concept ‘should’ to mean ‘in accordance with the basic operations of my mind’, but this isn’t an easy case of reduction like Water=H20. There is a deep sense in which normative terms really can’t mean what we think they mean if normative antirealism is true. This must be accounted for by either a deep and comprehensive question-dissolving, or by irreducible normative facts.
This ‘normative indispensability’ is not an argument, but it can be made into one:
If you’ve read the sequences, you are not going to like this argument, at all—it sounds like the ‘zombie’ argument, and it sounds like someone asking for an exception to reductionism—which is just what it is. This is the alternative:
If it’s truly ‘logic all the way down’ and there are no ‘> tokens’ over particular functional arrangements of matter, including the ones you used to form your beliefs, then you have to give up on knowing reality as it is. This isn’t the classic sense in which we all have an ‘imperfect model’ of reality as it is. If you give up on irreducible epistemic facts you give up knowing anything, probabilistically or otherwise, about reality-as-it-is, because there are no fundamentally, objectively, mind-independent ways you should or shouldn’t form beliefs about external reality. So you can’t say you’re better than the pebble with ‘2+2=5’ written on it, except descriptively, in that the causal process that produced the pebble contradicts the one that produced 2+2=4 in your brain.
What’s the alternative? If we don’t deny this consequence of normative antirealism, we have two options. One is the route of dissolving the question, by analogy with how reductionism has worked in the past; the other is to say that there are irreducible normative facts. In order to dissolve the question correctly, it needs to be done in a way that shows that a denial of epistemic facts isn’t damaging and doesn’t lead to epistemological relativism or scepticism. We can’t simply declare that normative facts can’t possibly exist—otherwise we’re vulnerable to argument 2). David Chalmers talks about question-dissolving for qualia:
David Chalmers of all people says that, even if he can’t conceive of how a deep reduction of Qualia might make their non-existence non-paradoxical, he might change his mind if he ever actually saw such a reduction! I say the same about epistemic and therefore normative facts. But crucially, no-one has solved this ‘meta problem’ for Qualia or for normative facts. There are partial hints of explanations for both, but there’s no full debunking argument that makes epistemic antirealism seem completely non-damaging and thus removes 2). I can’t imagine what such an account could look like, but the point of the ‘dissolving the question’ strategy is that it often isn’t imaginable in advance because your concepts are confused, so I’ll just leave that point. In the moral domain, the convergence arguments point against question-dissolving because they suggest the concept of normativity is solid and reliable. If those arguments fall, then question-dissolving looks more likely.
That’s one route. What of the other?
The alternative is to say that there are irreducible normative facts. This is counter-reductionist, counter-intuitive and strange. Two things that can make it less strange: these facts are not supposed to be intrinsically motivational (that violates the orthogonality thesis and is not permitted by the laws of physics), and they are not required to be facts about objects, like Platonic forms outside of time and space. They can be logical facts of the sort Eliezer talked about, but a particular kind of logical fact that has the property of being normative: the kind you should follow. They don’t need to ‘exist’ as such. What epistemic facts would do is say that certain reflective equilibria, certain arrangements of ‘reflecting on your own beliefs, using your current mind’, are the right ones, and others are the wrong ones. It doesn’t deny that this is the case:
Irreducible normativity just says that there is a meaningful, mind-independent difference between the virtuous and degenerate cases of recursive justification of your beliefs, rather than just ways of recursively justifying our beliefs that are… different.
If you buy that anti-realism is self-defeating, and think that we can know something about the normative domain via moral and non-moral convergence, then we have actual positive reasons to believe that normative facts are knowable (the convergence arguments help establish that moral facts aren’t and couldn’t be random things like stacking pebbles in prime-numbered heaps).
These two arguments are quite different—one is empirical (that our practical, epistemic and moral reasons tend towards agreement over time and after conceptual analysis and reflective justification) and the other is conceptual (that if you start out with normative concepts you are forced into using them).
Depending on which of the arguments you accept, there are four basic options. These are extremes of a spectrum: while the Normativity argument is all-or-nothing, the Convergence argument can come in degrees for different types of normative claims (epistemic, practical and moral):
• Accept Convergence and Reject Normativity: prescriptivist anti-realism. There are (probably) no mind-independent moral facts, but the nature of rationality is such that our values usually cohere and are stable, so we can treat morality as a more-or-less inflexible logical ordering over outcomes.
• Accept Convergence and Accept Normativity: moral realism. There are moral facts and we can know them.
• Reject Convergence and Reject Normativity: nihilist anti-realism. Morality is seen as a ‘personal life project’ about which we can’t expect much agreement or even within-person coherence.
• Reject Convergence and Accept Normativity: sceptical moral realism. Normative facts exist, but moral facts may not exist, or may be forever unknowable.
Even if what exactly normative facts are is hard to conceive, perhaps we can still know some things about them. Eliezer ended his post arguing for universalized, prescriptive anti-realism with a quote from HPMOR. Here’s a different quote:
Prescriptive Anti-realism
An extremely unscientific and incomplete list of people who fall into the various categories I gave in the previous post:
1. Accept Convergence and Reject Normativity: Eliezer Yudkowsky, Sam Harris (Interpretation 1), Peter Singer in The Expanding Circle, RM Hare and similar philosophers, HJPEV
2. Accept Convergence and Accept Normativity: Derek Parfit, Sam Harris (Interpretation 2), Peter Singer today, the majority of moral philosophers, Dumbledore
3. Reject Convergence and Reject Normativity: Robin Hanson, Richard Ngo (?), Lucas Gloor (?), most Error Theorists, Quirrell
4. Reject Convergence and Accept Normativity: A few moral philosophers, maybe Ayn Rand and objectivists?
The difference in practical, normative terms between 2), 4) and 3) is clear enough − 2 is a moral realist in the classic sense, 4 is a sceptic about morality but agrees that irreducible normativity exists, and 3 is a classic ‘antirealist’ who sees morality as of a piece with our other wants. What is less clear is the difference between 1) and 3). In my caricature above, I said Quirrell and Harry Potter from HPMOR were non-prescriptive and prescriptive anti-realists, respectively, while Dumbledore is a realist. Here is a dialogue between them that illustrates the difference.
The relevant issue here is that Harry draws a distinction between moral and non-moral reasons even though he doesn’t believe in irreducible normativity. In particular, he’s committed to a normative ethical theory, preference utilitarianism, as a fundamental part of how he values things.
Here is another illustration of the difference. Lucas Gloor (3) explains the case for suffering-focussed ethics, based on the claim that our moral intuitions assign diminishing returns to happiness vs suffering.
And what are those difficulties mentioned? The most obvious is the absurd conclusion—that scaling up a population can turn it from axiologically good to bad:
Here, then, is the difference. If you believe, as a matter of fact, that our values cohere, and you place fundamental importance on coherence—whether because you think that is the way to get at the moral truth (2), or because you judge that human values do cohere to a large degree for whatever other reason and place fundamental value on that coherence (1)—you will not be satisfied with leaving your moral theory inconsistent. If, on the other hand, you see morality as continuous with your other life plans and goals (3), then there is no pressure to be consistent. So for (3), focussing on suffering-reduction and denying the absurd conclusion is fine, but this would not satisfy (1).
I think that, on closer inspection, (3) is unstable—unless you are Quirrell and explicitly deny any role for ethics in decision-making, you will want to make some universal moral claims. The case for suffering-focussed ethics argues that the only coherent way to make sense of many of our moral intuitions is to conclude that there is a fundamental asymmetry between suffering and happiness, but then explicitly throws up a stop sign when we take that argument slightly further—to the absurd conclusion—because ‘the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled’. Why begin the project in the first place, unless you place strong terminal value on coherence (1)/(2)—in which case you cannot arbitrarily halt it?
I agree with that.
It sounds like you’re contrasting my statement from The Case for SFE (“fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms”) with “arbitrarily halting the search for coherence” / giving up on ethics playing a role in decision-making. But those are not the only two options: you can have some universal moral principles, but leave a lot of population ethics underdetermined. I sketched this view in this comment. The tl;dr is that instead of thinking of ethics as a single unified domain where “population ethics” is just a straightforward extension of “normal ethics,” you split “ethics” into a bunch of different subcategories:
Preference utilitarianism as an underdetermined but universal morality
“What is my life goal?” as the existentialist question we have to answer for why we get up in the morning
“What’s a particularly moral or altruistic thing to do with the future lightcone?” as an optional subquestion of “What is my life goal?” – of interest to people who want to make their life goals particularly altruistically meaningful
I think a lot of progress in philosophy is inhibited because people use underdetermined categories like “ethics” without making the question more precise.
This is very interesting—I recall from our earlier conversation that you said you might expect some areas of agreement, just not on axiology:
This may seem like an odd question, but are you possibly a normative realist, just not a full-fledged moral realist? What I didn’t say in that bracket was that ‘maybe axiology’ wasn’t my only guess about what the objective, normative facts at the core of ethics could be.
Following Singer in The Expanding Circle, I also think that some impartiality rule that leads to preference utilitarianism, maybe analogous to the anonymity rule in social choice, could be one of the normatively correct rules that ethics has to follow, but that if convergence among ethical views doesn’t occur the final answer might be underdetermined. This seems to be exactly the same as your view, so maybe we disagree less than it initially seemed.
In my attempted classification (of whether you accept convergence and/or irreducible normativity), I think you’d be somewhere between 1 and 3. I did say that those views might be on a spectrum depending on which areas of Normativity overall you accept, but I didn’t consider splitting up ethics into specific subdomains, each of which might have convergence or not:
Assuming that it is possible to cleanly separate population ethics from ‘preference utilitarianism’, it is consistent, though quite counterintuitive, to demand reflective coherence in our non-population ethical views but allow whatever we want in population ethics (this would be view 1 for most ethics but view 3 for population ethics).
(This still strikes me as exactly what we’d expect to see halfway to reaching convergence—the weirder and newer subdomain of ethics still has no agreement, while we have reached greater agreement on questions we’ve been working on for longer.)
Your case for SFE was intended to defend a view of population ethics—that there is an asymmetry between suffering and happiness. If we’ve decided that ‘population ethics’ is to remain underdetermined, that is, we adopt view 3 for population ethics, what is your argument (that SFE is an intuitively appealing explanation for many of our moral intuitions) meant to achieve? Can’t I simply declare that my intuitions say different, and then we have nothing more to discuss, if we already know we’re going to leave population ethics underdetermined?
I’m not sure. I have to read your most recent comments on the EA forum more closely. If I taboo “normative realism” and just describe my position, it’s something like this:
I confidently believe that human expert reasoners won’t converge on their life goals and their population ethics even after philosophical reflection under idealized conditions. (For essentially the same reasons: I think it’s true that if “life goals don’t converge” then “population ethics also doesn’t converge”)
However, I think there would likely be convergence on subdomains/substatements of ethics, such as “preference utilitarianism is a good way to view some important aspects of ‘ethics’”
I don’t know if the second bullet point makes me a normative realist. Maybe it does, but I feel like I could make the same claim without normative concepts. (I guess that’s allowed if I’m a naturalist normative realist?)
Cool! I personally wouldn’t call it “normatively correct rule that ethics has to follow,” but I think it’s something that sticks out saliently in the space of all normative considerations.
Okay, but isn’t it also what you’d expect to see if population ethics is inherently underdetermined? One intuition is that population ethics takes our learned moral intuitions “off distribution.” Another intuition is that it’s the only domain in ethics where it’s ambiguous what “others’ interests” refers to. I don’t think it’s an outlandish hypothesis that population ethics is inherently underdetermined. If anything, it’s kind of odd that anyone thought there’d be an obviously correct solution to this. As I note in the comment I linked to in my previous post, there seems to be an interesting link between “whether population ethics is underdetermined” and “whether every person should have the same type of life goal.” I think “not every person should have the same type of life goal” is a plausible position even just intuitively. (And I have some not-yet-written-out arguments for why it seems clearly the correct stance to me, mostly based on my own example. I think about my life goals in a way that other clear-thinking people wouldn’t all want to replicate, and I’m confident that I’m not somehow confused about what I’m doing.)
Exactly! :) That’s why I called my sequence a sequence on moral anti-realism. I don’t think suffering-focused ethics is “universally correct.” The case for SFE is meant in the following way: As far as personal takes on population ethics go, SFE is a coherent attractor. It’s a coherent and attractive morality-inspired life goal for people who want to devote some of their caring capacity to what happens to earth’s future light cone.
Side note: This framing is also nice for cooperation. If you think in terms of all-encompassing moralities, SFE consequentialism and non-SFE consequentialism are in tension. But if population ethics is just a subdomain of ethics, then the tension is less threatening. Democrats and Republicans are also “in tension,” worldview-wise, but many of them also care – or at least used to care – about obeying the norms of the overarching political process. Similarly, I think it would be good if EA moved toward viewing people with suffering-focused versus not-suffering-focused population ethics as “not more in tension than Democrats versus Republicans.” This would be the natural stance if we started viewing population ethics as a morality-inspired subdomain of currently-existing people thinking about their life goals (particularly with respect to “what do we want to do with earth’s future lightcone”). After you’ve chosen your life goals, that still leaves open the further question “How do you think about other people having different life goals from yours?” That’s where preference utilitarianism comes in (if one takes a strong stance on how much to respect others’ interests) or where we can refer to “norms of civil society” (weaker stance on respect; formalizable with contractualism that has a stronger action-omission distinction than preference utilitarianism). [Credit to Scott Alexander’s archipelago blogpost for inspiring this idea. I think he also had a blogpost on “axiology” that made a similar point, but by that point I might have already found my current position.]
In any case, I’m considering changing all my framings from “moral anti-realism” to “morality is underdetermined.” It seems like people understand me much faster if I use the latter framing, and in my head it’s the same message.
---
As a rough summary, I think the most EA-relevant insights from my sequence (and comment discussions under the sequence posts) are the following:
1. Morality could be underdetermined
2. Moral uncertainty and confidence in strong moral realism are in tension
3. There is no absolute wager for moral realism
(Because assuming idealized reasoning conditions, all reflectively consistent moral opinions are made up of the same currency. That currency – “what we on reflection care about” – doesn’t suddenly lose its significance if there’s less convergence than we initially thought. Just like I shouldn’t like the taste of cilantro less once I learn that it tastes like soap to many people, I also shouldn’t care less about reducing future suffering if I learn that not everyone will find this the most meaningful thing they could do with their lives.)
4. Mistaken metaethics can lead to poorly grounded moral opinions
(Because people may confuse moral uncertainty with having underdetermined moral values, and because morality is not a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on.)
5. When it comes to moral questions, updating on peer disagreement doesn’t straightforwardly make sense
(Because it matters whether the peers share your most fundamental intuitions and whether they carve up the option space in the same way as you. Regarding the latter, someone who never even ponders the possibility of treating population ethics separately from the rest of ethics isn’t reaching a different conclusion on the same task. Instead, they’re doing a different task. I’m interested in all three of the questions I dissolved ethics into, whereas people who play the game “pick your version of consequentialism and answer every broadly-morality-related question with that” are playing a different game. Obviously that framing is a bit of a strawman, but you get the point!)
I’m here from your comment on Lukas’ post on the EA Forum. I haven’t been following the realism vs anti-realism discussion closely, though, just kind of jumped in here when it popped up on the EA Forum front page.
Are there good independent arguments against the absurd conclusion? It’s not obvious to me that it’s bad. Its rejection is also so close to separability/additivity that for someone who’s not sold on separability/additivity, an intuitive response is “Well ya, of course, so what?”. It seems to me that the absurd conclusion is intuitively bad for some only because they have separable/additive intuitions in the first place, so it almost begs the question against those who don’t.
By deny, do you mean reject? Doesn’t negative utilitarianism work? Or do you mean incorrectly denying that the absurd conclusion follows from diminishing returns to happiness vs suffering?
Also, for what it’s worth, my view is that a symmetric preference consequentialism is the worst way to do preference consequentialism, and I recognize asymmetry as a general feature of ethics. See these comments:
[1]
[2]
[3]