Realism and Rationality
Format warning: This post has somehow ended up consisting primarily of substantive endnotes. It should be fine to read just the (short) main body without looking at any of the endnotes, though. The endnotes elaborate on various claims and distinctions and also include a much longer discussion of decision theory.
Thank you to Pablo Stafforini, Phil Trammell, Johannes Treutlein, and Max Daniel for comments on an initial draft. I have also slightly edited the post since I first published it, to try to make a few points clearer.
When discussing normative questions, it is not uncommon for members of the rationalist community to identify as anti-realists. But normative anti-realism seems to me to be in tension with some of the community’s core interests, positions, and research activities. In this post I suggest that the cost of rejecting realism may be larger than is sometimes recognized. [1]
1. Realism and Anti-Realism
Everyone is, at least sometimes, inclined to ask: “What should I do?”
We ask this question when we’re making a decision and it seems like there are different considerations to be weighed up. You might be considering taking a new job in a new city, for example, and find yourself wondering how to balance your preferences with those of your significant other. You might also find yourself thinking about whether you have any obligation to do impactful work, about whether it’s better to play it safe or take risks, about whether it’s better to be happy in the moment or to be able to look back with satisfaction, and so on. It’s almost inevitable that in a situation like this you will find yourself asking “What should I do?” and reasoning about it as though the question has an answer you can approach through a certain kind of directed thought.[2]
But it’s also conceivable that this sort of question doesn’t actually have an answer. Very roughly, at least to certain philosophers, realism is a name for the view that there are some things that we should do or think. Anti-realism is a name for the view that there are not.[3][4][5][6]
2. Anti-Realism and the Rationality Community
In discussions of normative issues, it seems not uncommon for members of the rationalist community to identify as “anti-realists.” Since people in different communities can obviously use the same words to mean different things, I don’t know what fraction of rationalists have the same thing in mind when they use the term “anti-realism.”
To the extent people do have the same thing in mind, though, I find anti-realism hard to square with a lot of other views and lines of research that are popular within the community. A few main points of tension stand out to me.
2.1 Normative Uncertainty
One first point of tension is the community’s relatively strong interest in the subject of normative uncertainty. At least as it’s normally discussed in the philosophy literature, normative uncertainty is uncertainty about normative facts that bear on what we should do. If we assume that anti-realism is true, though, then we are assuming that there are no such facts. It seems to me like a committed anti-realist could not be in a state of normative uncertainty.
It may still be the case, as Sepielli (2012) suggests, that a committed anti-realist can experience psychological states that are interestingly structurally analogous to states of normative uncertainty. However, Bykvist and Olson (2012) push back on this, in my view fairly forcefully, and Sepielli is in any case clear that: “Strictly speaking, there cannot be such a thing as normative uncertainty if non-cognitivism [the dominant form of anti-realism] is true.”[7]
2.2 Strongly Endorsed Normative Views
A second point of tension is the existence of a key set of normative claims that a large portion of the community seems to treat as true.
One of these normative claims is the Bayesian claim that we ought to have degrees of belief in propositions that are consistent with the Kolmogorov probability axioms and that are updated in accordance with Bayes’ rule. It seems to me like very large portions of the community self-identify as Bayesians and regard other ways of assigning and updating degrees of belief in propositions as not just different but incorrect.
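To make the norm in question concrete: Bayes’ rule says the posterior credence P(H|E) equals P(E|H)P(H)/P(E). Here is a minimal sketch of a single Bayesian update; the hypothesis and all the numbers are invented purely for illustration:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E),
# where P(E) = P(E | H) * P(H) + P(E | ~H) * P(~H).
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior credence in hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: prior credence 0.3 that it will rain today;
# dark clouds are much likelier given rain (0.9) than given no rain (0.2).
posterior = bayes_update(prior=0.3, p_e_given_h=0.9, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.659
```

The Bayesian claim at issue is that updating your credences by any rule *other* than this one is not merely different but incorrect.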
Another of these normative claims is the subjectivist claim that we should do whatever would best fulfill some version of our current preferences. To learn what we should do, on this view, the main thing is to introspect about our own preferences.[8] Whether or not a given person should commit a violent crime, for instance, depends purely on whether they want to commit the crime (or perhaps on whether they would want to commit it if they went through some particular process of reflection).
A further elaboration on this claim is that, when we are uncertain about the outcomes of our actions, we should more specifically act to maximize the expected fulfillment of our desires. We should consider the different possible outcomes of each action, assign them probabilities, assign them desirability ratings, and then use the expected value formula to rate the overall goodness of the action. Whichever action has the best overall rating is the one we should take.
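The recipe just described can be sketched in a few lines; the actions, probabilities, and desirability ratings below are all invented for illustration:

```python
# Score each action by its expected desirability:
# sum over outcomes of P(outcome | action) * desirability(outcome).
def expected_value(outcomes):
    """outcomes: list of (probability, desirability) pairs for one action."""
    return sum(p * d for p, d in outcomes)

# Hypothetical job-change decision (all numbers invented):
actions = {
    "take_new_job": [(0.6, 10), (0.4, -5)],  # EV = 0.6*10 + 0.4*(-5) = 4.0
    "stay_put":     [(0.9, 4), (0.1, 0)],    # EV = 0.9*4             = 3.6
}
best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)  # take_new_job
```

On the view being described, the highest-scoring action is not just the one you happen to prefer; it is the one you *should* take.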
One possible way of squaring an endorsement of anti-realism with an apparent endorsement of these normative claims is to argue that people don’t actually have normative claims in mind when they write and talk about these issues. Non-cognitivists—a particular variety of anti-realists—argue that many utterances that seem at first glance like claims about normative facts are in fact nothing more than expressions of attitudes. For instance, an emotivist—a further sub-variety of non-cognitivist—might argue that the sentence “You should maximize the expected fulfillment of your current desires!” is simply a way of expressing a sense of fondness toward this course of action. The sentence might be cashed out as being essentially equivalent in content to the sentence, “Hurrah, maximizing the expected fulfillment of your current desires!”
Although a sizeable portion of philosophers are non-cognitivists, I generally don’t find it very plausible as a theory of what people are trying to do when they seem to make normative claims.[9] In this case it doesn’t feel to me like most members of the rationalist community are just trying to describe one particular way of thinking and acting, which they happen to prefer to others. It seems to me, rather, that people often talk about updating your credences in accordance with Bayes’ rule and maximizing the expected fulfillment of your current desires as the correct things to do.
One more thing that stands out to me is that arguments for anti-realism often seem to be presented as though they implied (rather than negated) the truth of some of these normative claims. For example, the popular “Replacing Guilt” sequence on Minding Our Way seems to me to repeatedly attack normative realism. It rejects the idea of “shoulds” and points out that there aren’t “any oughtthorities to ordain what is right and what is wrong.” But then it seems to draw normative implications out of these attacks: among other implications, you should “just do what you want.” At least taken at face value, this line of reasoning wouldn’t be valid. It makes no more sense than reasoning that, if there are no facts about what we should do, then we should “just maximize total hedonistic well-being” or “just do the opposite of what we want” or “just open up souvenir shops.” Of course, though, there’s a good chance that I’m misunderstanding something here.
2.3 Decision Theory Research
A third point of tension is the community’s engagement with normative decision theory research. Different normative decision theories pick out different necessary conditions for an action to be the one that a given person should take, with a focus on how one should respond to uncertainty (rather than on what ends one should pursue).[10][11]
A typical version of CDT says that the action you should take at a particular point in time is the one that would cause the largest expected increase in value (under some particular framework for evaluating causation). A typical version of EDT says that the action you should take at a particular point in time is the one that would, once you take it, allow you to rationally expect the most value. There are also alternative versions of these theories—for instance, versions using risk-weighted expected value maximization or the criterion of stochastic dominance—that break from the use of pure expected value.
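To make the contrast concrete, here is a rough numerical sketch of how typical versions of the two theories evaluate the classic Newcomb problem; the predictor accuracy and payoffs are illustrative choices, not canonical ones:

```python
# Newcomb's problem with a 99%-accurate predictor (illustrative numbers):
# the opaque box holds $1M iff one-boxing was predicted; the clear box holds $1K.
ACC = 0.99
M, K = 1_000_000, 1_000

# EDT treats the chosen action as evidence about what was predicted:
edt_one_box = ACC * M            # ~990,000
edt_two_box = (1 - ACC) * M + K  # ~11,000

# CDT holds the (already-fixed) box contents constant; for any credence q
# that the opaque box is full, two-boxing causally dominates:
q = 0.5
cdt_one_box = q * M
cdt_two_box = q * M + K

print(edt_one_box > edt_two_box)  # True: EDT recommends one-boxing
print(cdt_two_box > cdt_one_box)  # True: CDT recommends two-boxing
```

The two theories agree on every empirical fact about the scenario; they diverge only on which expectation is the one that matters for what you should do.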
I’ve pretty frequently seen it argued within the community (e.g. in the papers “Cheating Death in Damascus” and “Functional Decision Theory”) that CDT and EDT are not “correct” and that some other new theory such as functional decision theory is. But if anti-realism is true, then no decision theory is correct.
Eliezer Yudkowsky’s influential early writing on decision theory seems to me to take an anti-realist stance. It suggests that we can only ask meaningful questions about the effects and correlates of decisions. For example, in the context of the Newcomb thought experiment, we can ask whether one-boxing is correlated with winning more money. But, it suggests, we cannot take a step further and ask what these effects and correlations imply about what it is “reasonable” for an agent to do (i.e. what they should do). This question—the one that normative decision theory research, as I understand it, is generally about—is seemingly dismissed as vacuous.
If this apparently anti-realist stance is widely held, then I don’t understand why the community engages so heavily with normative decision theory research or why it takes part in discussions about which decision theory is “correct.” It strikes me as a bit like an atheist enthusiastically following theological debates about which god is the true god. But I’m mostly just confused here.[12][13]
3. Sympathy for Realism
I wouldn’t necessarily describe myself as a realist. I get that realism is a weird position. It’s both metaphysically and epistemologically suspicious. What is this mysterious property of “should-ness” that certain actions are meant to possess—and why would our intuitions about which actions possess it be reliable?[14][15]
But I am also very sympathetic to realism and, in practice, tend to reason about normative questions as though I were a full-throated realist. My sympathy for realism and tendency to think as a realist largely stem from my perception that if we reject realism and internalize this rejection then there’s really not much to be said or thought about anything. We can still express attitudes at one another, for example suggesting that we like certain actions or credences in propositions better than others. We can present claims about the world, without any associated explicit or implicit belief that others should agree with them or respond to them in any particular way. And that seems to be about it.
Furthermore, if anti-realism is true, then it can’t also be true that we should believe that anti-realism is true. Belief in anti-realism seems to undermine itself. Perhaps belief in realism is self-undermining in a similar way—if seemingly correct reasoning leads us to account for all the ways in which realism is a suspect position—but the negative feedback loop in this case at least seems to me to be less strong.[16]
I think that realism warrants more respect than it has historically received in the rationality community, at least relative to the level of respect it gets from philosophers.[17] I suspect that some of this lack of respect might come from a relatively weaker awareness of the cost of rejecting realism or of the way in which belief in anti-realism appears to undermine itself.
[1]
I’m basing the views I express in this post primarily on Derek Parfit’s writing, specifically his book On What Matters. For this reason, it seems pretty plausible to me that there are some important points I’ve missed by reading too narrowly. In addition, it also seems likely that some of the ways in which I talk about particular issues around normativity will sound a bit foreign or just generally “off” to people who are highly familiar with some of these issues. One unfortunate reason for this is that the study of normative questions and of the nature of normativity seems to me to be spread out pretty awkwardly across the field of philosophy, with philosophers in different sub-disciplines often discussing apparently interconnected questions in significant isolation from one another while using fairly different terminology. This means that (e.g.) meta-ethics and decision theory are seldom talked about at the same time and are often talked about in ways that make it difficult to see how they fit together. A major reason I am leaning on Parfit’s work is that he is—to my knowledge—one of relatively few philosophers to have tried to approach questions around normativity through a single unified framework.
[2]
This is a point that is also discussed at length in David Enoch’s book Taking Morality Seriously (pgs. 70-73):
Perhaps...we are essentially deliberative creatures. Perhaps, in other words, we cannot avoid asking ourselves what to do, what to believe, how to reason, what to care about. We can, of course, stop deliberating about one thing or another, and it’s not as if all of us have to be practical philosophers (well, if you’re reading this book, you probably are, but you know what I mean). It’s opting out of the deliberative project as a whole that may not be an option for us….
[Suppose] law school turned out not to be all you thought it would be, and you no longer find the prospects of a career in law as exciting as you once did. For some reason you don’t seem to be able to shake off that old romantic dream of studying philosophy. It seems now is the time to make a decision. And so, alone, or in the company of some others you find helpful in such circumstances, you deliberate. You try to decide whether to join a law firm, apply to graduate school in philosophy, or perhaps do neither.
The decision is of some consequence, and so you resolve to put some thought into it. You ask yourself such questions as: Will I be happy practicing law? Will I be happier doing philosophy? What are my chances of becoming a good lawyer? A good philosopher? How much money does a reasonably successful lawyer make, and how much less does a reasonably successful philosopher make? Am I, so to speak, more of a philosopher or more of a lawyer? As a lawyer, will I be able to make a significant political difference? How important is the political difference I can reasonably expect to make? How important is it to try and make any political difference? Should I give any weight to my father’s expectations, and to the disappointment he will feel if I fail to become a lawyer? How strongly do I really want to do philosophy? And so on. Even with answers to most – even all – of these questions, there remains the ultimate question. “All things considered”, you ask yourself, “what makes best sense for me to do? When all is said and done, what should I do? What shall I do?”
When engaging in this deliberation, when asking yourself these questions, you assume, so it seems to me, that they have answers. These answers may be very vague, allow for some indeterminacy, and so on. But at the very least you assume that some possible answers to these questions are better than others. You try to find out what the (better) answers to these questions are, and how they interact so as to answer the arch-question, the one about what it makes most sense for you to do. You are not trying to create these answers. Of course, in an obvious sense what you will end up doing is up to you (or so, at least, both you and I are supposing here). And in another, less obvious sense, perhaps the answer to some of these questions is also up to you. Perhaps, for instance, how happy practicing law will make you is at least partly up to you. But, when trying to make up your mind, it doesn’t feel like just trying to make an arbitrary choice. This is just not what it is like to deliberate. Rather, it feels like trying to make the right choice. It feels like trying to find the best solution, or at least a good solution, or at the very least one of the better solutions, to a problem you’re presented with. What you’re trying to do, it seems to me, is to make the decision it makes most sense for you to make. Making the decision is up to you. But which decision is the one it makes most sense for you to make is not. This is something you are trying to discover, not create. Or so, at the very least, it feels like when deliberating.
[3]
Specifically, the two relevant views can be described as realism and anti-realism with regard to “normativity.” We can divide the domain of “normativity” up into the domains of “practical rationality,” which describes what actions people should take, and “epistemic rationality,” which describes which beliefs or degrees of belief people should hold. The study of ethics, decision-making under uncertainty, and so on can then all be understood as sub-components of the study of practical rationality. For example, one view on the study of ethics is that it is the study of how factors other than one’s own preferences might play roles in determining what actions one should take. It should be noted that terminology varies very widely though. For example, different authors seem to use the word “ethics” more or less inclusively. The term “moral realism” also sometimes means roughly the same thing as “normative realism,” as I’ve defined it here, and sometimes picks out a more specific position.
[4]
As an edit to the initial post, I think it’s probably worth saying more about the concept of “moral realism” in relation to “normative realism.” Depending on the context, “moral realism” might be taken to refer to: (a) normative realism, (b) realism about practical rationality (not just epistemic rationality), (c) realism about practical rationality combined with the object-level belief that people should do more than just try to satisfy their own personal preferences, or (d) something else in this direction.
One possible reason the term lacks a consensus definition is that, perhaps surprisingly, many contemporary “moral realists” aren’t actually very preoccupied with the concept of “morality.” Popular books like Taking Morality Seriously, On What Matters, and The Normative Web spend most of their energy defending normative realism, more broadly, and my impression is that their critics spend most of their energy attacking normative realism more broadly. One reason for this shift in focus toward normative realism is the realization that, on almost any conception of “moral realism,” nearly all of the standard metaphysical and epistemological objections to “moral realism” also apply just as well to normative realism in general. Another reason is that any possible distinction between moral and normative-but-not-moral facts doesn’t seem like it could have much practical relevance: If we know that we should make some decision, then we know that we should make it; we have no obvious additional need to know or care whether this normative fact warrants the label “moral fact” or not. Here, for example, is David Enoch, in Taking Morality Seriously, on the concept of morality (pg. 86):
What more...does it take for a normative truth (or falsehood) to qualify as moral? Morality is a particular instance of normativity, and so we are now in effect asking about its distinctive characteristics, the ones that serve to distinguish between the moral and the rest of the normative. I do not have a view on these special characteristics of the moral. In fact, I think that for most purposes this is not a line worth worrying about. The distinction within the normative between the moral and the non-moral seems to me to be shallow compared to the distinction between the normative and the non-normative—both philosophically, and, as I am about to argue, practically. (Once you know you have a reason to X and what this reason is, does it really matter for your deliberation whether it qualifies as a moral reason?)
[5]
There are two major strands of anti-realism. Error theory (sometimes equated with “nihilism”) asserts that all claims that people should do particular things or refrain from doing particular things are false. Non-cognitivism asserts that utterances of the form “A should do X” typically cannot even really be understood as claims; they’re not the sort of thing that could be true or false.
[6]
In this post, for simplicity, I’m talking about normativity using binary language. Either it’s the case that you “should” take an action or it’s not the case that you “should” take it. But we might also talk in less binary terms. For example, there may be some actions that you merely have “more reason” to take than others.
[7]
In Sepielli’s account, for example, the experience of feeling extremely in favor of blaming someone a little bit for taking an action X is analogous to the experience of being extremely confident that it is a little bit wrong to take action X. This account is open to at least a few objections, such as the objection that degrees of favorability don’t—at least at first glance—seem to obey the standard axioms of probability theory. Even if we do accept the account, though, I still feel unclear about the proper method and justification for converting debates around normative uncertainty into debates around these other kinds of psychological states.
[8]
If my memory is correct, one example of a context in which I have encountered this subjectivist viewpoint is in a CFAR workshop. One lesson instructs attendees that if it seems like they “should” do something, but then upon reflection they realize they don’t want to do it, then it’s not actually true that they should do it.
[9]
The PhilPapers survey suggests that about a quarter of both normative ethicists and applied ethicists also self-identify as anti-realists, with the majority of them presumably leaning toward non-cognitivism over error theory. It’s still an active matter of debate whether non-cognitivists have sensible stories about what people are trying to do when they seem to be discussing normative claims. For example, naive emotivist theories stumble in trying to explain sentences like: “It’s not true that either you should do X or you should do Y.”
[10]
There is also non-normative research that falls under the label “decision theory,” which focuses on exploring the ways in which people do in practice make decisions or neutrally exploring the implications of different assumptions about decision-making processes.
[11]
Arguably, even in academic literature, decision theories are often discussed under the implicit assumption that some form of subjectivism is true. However, it is also very easy to modify the theories to accommodate views that tell you to take into account things beyond your current desires. Value might be equated with one’s future welfare, for example, or with the total future welfare of all conscious beings.
[12]
One thing that makes this issue a bit complicated is that rationalist community writing on decision theory sometimes seems to switch back and forth between describing decision theories as normative claims about decisions (which I believe is how academic philosophers typically describe decision theories) and as algorithms to be used (which seems to be inconsistent with how academic philosophers typically describe decision theories). I think this tendency to switch back and forth between describing decision theories in these two distinct ways can be seen both in papers proposing new decision theories and in online discussions. I also think this switching tendency can make things pretty confusing. Although it makes sense to discuss how an algorithm “performs” when “implemented,” once we specify a sufficiently precise performance metric, it does not seem to me to make sense to discuss the performance of a normative claim. I think the tendency to blur the distinction between algorithms and normative claims—or, as Will MacAskill puts it in his recent and similar critique, between “decision procedures” and “criteria of rightness”—partly explains why proponents of FDT and other new decision theories have not been able to get much traction with academic decision theorists. For example, causal decision theorists are well aware that people who always take the actions that CDT says they should take will tend to fare less well in Newcomb scenarios than people who always take the actions that EDT says they should take. Causal decision theorists are also well aware that there are some scenarios—for example, a Newcomb scenario with a perfect predictor and the option to get brain surgery to pre-commit yourself to one-boxing—in which there is no available sequence of actions such that CDT says you should take each of the actions in the sequence.
If you ask a causal decision theorist what sort of algorithm you should (according to CDT) put into an AI system that will live in a world full of Newcomb scenarios, if the AI system won’t have the opportunity to self-modify, then I think it’s safe to say a causal decision theorist won’t tell you to put in an algorithm that only produces actions that CDT says it should take. This tells me that we really can’t fluidly switch back and forth between making claims about the correctness of normative principles and claims about the performance of algorithms, as though there were a canonical one-to-one mapping between these two sorts of claims. Insofar as rationalist writing on decision theory tends to do this sort of switching, I suspect that it contributes to confusion on the part of many academic readers. See also this blog post by an academic decision theorist, Wolfgang Schwarz, for a much more thorough perspective on why proponents of FDT may be having difficulty getting traction within the academic decision theory community.
[13]
A similar concern also leads me to assign low (p<10%) probability to normative decision theory research ultimately being useful for avoiding large-scale accidental harm caused by AI systems. It seems to me like the question “What is the correct decision theory?” only has an answer if we assume that realism is true. But even if we assume that realism is true, we are now asking a normative question (“What criterion determines whether an action is one an agent ‘should’ take?”) as a way of trying to make progress on a non-normative question (“What approaches to designing advanced AI systems result in unintended disasters and which do not?”). Proponents of CDT and proponents of EDT do not actually disagree on how any given agent will behave, on what the causal outcome of assigning an agent a given algorithm will be, or on what evidence might be provided by the choice to assign an agent a given algorithm; they both agree, for example, about how much money different agents will tend to earn in the classic Newcomb scenario. What decision theorists appear to disagree about is a separate normative question that floats above (or rather “supervenes” upon) questions about observed behavior or questions about outcomes. I don’t see how answering this normative question could help us much in answering the non-normative question of what approaches to designing advanced AI systems don’t (e.g.) result in global catastrophe. Put another way, my concern is that the strategy here seems to rely on the hope that we can derive an “is” from an “ought.”
However, in keeping with the above endnote, community work on decision theory only sometimes seems to be pitched (as it is in the abstract of this paper) as an exploration of normative principles. It is also sometimes pitched as an exploration of how different “algorithms” “perform” across relevant scenarios. This exploration doesn’t seem to me to have any direct link to the core academic decision theory literature and, given a sufficiently specific performance metric, does not seem to be inherently normative. I’m actually more optimistic, then, about this line of research having implications for AI development. Nonetheless, for reasons similar to the ones described in the post “Decision Theory Anti-Realism,” I’m still not very optimistic. In the cases that are being considered, the answer to the question “Which algorithm performs best?” will depend on subtle variations in the set of counterfactuals we consider when judging performance; different algorithms come out on top for different sets of counterfactuals. For example, in a prisoner’s dilemma, the best-performing algorithm will vary depending on whether we are imagining a counterfactual world where just one agent was born with a different algorithm or a counterfactual world where both agents were born with different algorithms. It seems unclear to me where we go from here except perhaps to list several different sets of imaginary counterfactuals and note which algorithms perform best relative to them.
Wolfgang Schwarz and Will MacAskill also make similar points, regarding the sensitivity of comparisons of algorithmic performance, in their essays on FDT. Schwarz writes:
Yudkowsky and Soares constantly talk about how FDT “outperforms” CDT, how FDT agents “achieve more utility”, how they “win”, etc. As we saw above, it is not at all obvious that this is true. It depends, in part, on how performance is measured. At one place, Yudkowsky and Soares are more specific. Here they say that “in all dilemmas where the agent’s beliefs are accurate [??] and the outcome depends only on the agent’s actual and counterfactual behavior in the dilemma at hand—reasonable constraints on what we should consider “fair” dilemmas—FDT performs at least as well as CDT and EDT (and often better)”. OK. But how should we understand “depends on … the dilemma at hand”? First, are we talking about subjunctive or evidential dependence? If we’re talking about evidential dependence, EDT will often outperform FDT. And EDTers will say that’s the right standard. CDTers will agree with FDTers that subjunctive dependence is relevant, but they’ll insist that the standard Newcomb Problem isn’t “fair” because here the outcome (of both one-boxing and two-boxing) depends not only on the agent’s behavior in the present dilemma, but also on what’s in the opaque box, which is entirely outside her control. Similarly for all the other cases where FDT supposedly outperforms CDT. Now, I can vaguely see a reading of “depends on … the dilemma at hand” on which FDT agents really do achieve higher long-run utility than CDT/EDT agents in many “fair” problems (although not in all). But this is a very special and peculiar reading, tailored to FDT. We don’t have any independent, non-question-begging criterion by which FDT always “outperforms” EDT and CDT across “fair” decision problems.
MacAskill writes:
[A]rguing that FDT does best in a class of ‘fair’ problems, without being able to define what that class is or why it’s interesting, is a pretty weak argument. And, even if we could define such a class of cases, claiming that FDT ‘appears to be superior’ to EDT and CDT in the classic cases in the literature is simply begging the question: CDT adherents claims that two-boxing is the right action (which gets you more expected utility!) in Newcomb’s problem; EDT adherents claims that smoking is the right action (which gets you more expected utility!) in the smoking lesion. The question is which of these accounts is the right way to understand ‘expected utility’; they’ll therefore all differ on which of them do better in terms of getting expected utility in these classic cases.
[14]
In my view, the epistemological issues are the most severe ones. I think Sharon Street’s paper A Darwinian Dilemma for Realist Theories of Value, for example, presents an especially hard-to-counter attack on the realist position on epistemological grounds. She argues that, in light of the view that our brains evolved via natural selection, and natural selection did not and could not have directly selected for the accuracy of our normative intuitions, it is extremely difficult to construct a compelling explanation for why our normative intuitions should be correlated in any way with normative facts. This technically leaves open the possibility of there being non-trivial normative facts, without us having any way of perceiving or intuiting them, but this state of affairs would strike most people as absurd. Although some realists, including Parfit, have attempted to counter Street’s argument, I’m not aware of anyone who I feel has truly succeeded. Street’s argument pretty much just seems to work to me.
These metaphysical and epistemological issues become less concerning if we accept some version of “naturalist realism,” which asserts that all normative claims can be reduced to claims about the natural world (i.e. claims about physical and psychological properties) and can therefore be tested in roughly the same way we might test any other claim about the natural world. However, this view seems wrong to me.
The bluntest objection to naturalist realism is what’s sometimes called the “just-too-different” objection. This is the objection that, to many and perhaps most people, normative claims are just obviously a different sort of claim. No one has ever felt any inclination to invoke an “is/is-made-of-wood divide” or an “is/is-illegal-in-Massachusetts divide,” because the property of being made of wood and the property of being illegal in Massachusetts are obviously properties of the standard (natural) kind. But references to the “is/ought divide”—or, equivalently, the distinction between the “positive” and the “normative”—are commonplace and don’t typically provoke blank stares. Normative discussions are, seemingly, about something above-and-beyond and distinct from discussions of the physical and psychological aspects of a situation. When people debate whether or not it’s “wrong” to support the death penalty or “wrong” for women to abort unwanted pregnancies, for example, it seems obvious that physical and psychological facts are typically not the core (or at least only) thing in dispute.
G.E. Moore’s “Open Question Argument” elaborates on this objection. The argument also raises the point that, in many cases where we are inclined to ask “What should I do?”, it seems like what we are inclined to ask goes above-and-beyond any individual question we might ask about the natural world. Consider again the case where we are considering a career change and wondering what we should do. It seems like we could know all of the natural facts—facts about how happy we will be on average while pursuing each career, how satisfied we will feel looking back on each career, how many lives we could improve by donating money made in each career, what labor practices each company has, how disappointed our parents will be if we pursue each career, how our personal values will change if we pursue each career, what we would end up deciding at the end of one hypothetical deliberative process or another, etc.—and still retain the inclination to ask, “Given all this, what should I do?” This means that—insofar as we’re taking the realist stance that this question actually has a meaningful answer, rather than rejecting the question as vacuous—the claim that we “should” do one thing or another cannot easily be understood as a claim about the natural world. A set of claims about the natural world may support the claim that we should make a certain decision, but, in cases such as this one, it seems like no set of claims about the natural world is equivalent to the claim that we should make a certain decision.
A last objection to mention is Parfit’s “Triviality Objection” (On What Matters, Section 95). The basic intuition behind Parfit’s objection is that pretty much any attempt to define the word “should” in terms of natural properties would turn many normative claims into puzzling assertions of either obvious tautologies or obvious falsehoods. For example, consider a man who is offered—at the end of his life, I guess by the devil or something—the option of undergoing a year of certain torture for a one-in-a-trillion chance of receiving a big prize: a trillion years of an equivalently powerful positive experience, plus a single lollipop. He is purely interested in experiencing pleasure and avoiding pain and would like to know whether he should take the offer. A decision theorist who endorses expected desire-fulfillment maximization says that he “should,” since the lollipop tips the offer over into having slightly positive expected value. A decision theorist who endorses risk aversion says he “should not,” since the man is nearly certain to be horribly tortured without receiving any sort of compensation. In this context, it’s hard to understand how we could redefine the claim “He should take action X” in terms of natural properties and have this disagreement make any sense. We could define the phrase as meaning “Action X maximizes expected fulfillment of desire,” but now the first decision theorist is expressing an obvious tautology and the second decision theorist is expressing an obvious falsehood. We could also try, in keeping with a suggestion by Eliezer Yudkowsky, to define the phrase as meaning “Action X is the one that someone acting in a winning way would take.” But this is obviously too vague to imply a particular action; taking the gamble is associated with some chance of winning and some chance of losing.
We could make the definition more specific—for instance, saying “Action X is the one that someone acting in a way that maximizes expected winning would take”—but now of course we’re back in tautology mode. The apparent upshot, here, is that many normative claims simply can’t be interpreted as non-trivially true or non-trivially false claims about natural properties. The associated disagreements only become sensible if we interpret them as being about something above-and-beyond these properties.
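For what it’s worth, the arithmetic behind Parfit’s example can be made explicit. Measuring value in “years of the experience,” and giving the lollipop a tiny assumed value (my own placeholder number), exact arithmetic confirms that the lollipop alone tips the expected value positive:

```python
from fractions import Fraction as F  # exact arithmetic, no float rounding

P_PRIZE = F(1, 10**12)   # one-in-a-trillion chance of the prize
PRIZE_YEARS = 10**12     # a trillion years of equally intense positive experience
LOLLIPOP = F(1, 10**9)   # assumed (tiny) value of the lollipop, in year-units

# Taking the offer: one year of torture (value -1) is certain; the prize is not.
ev = -1 + P_PRIZE * (PRIZE_YEARS + LOLLIPOP)

# The prize term exactly offsets the certain torture (1e-12 * 1e12 = 1), so the
# entire expected value is the lollipop's sliver of probability-weighted value:
assert ev == P_PRIZE * LOLLIPOP
assert ev > 0
```

This makes it easy to see why each decision theorist’s verdict becomes trivial under the corresponding naturalist definition: the expected-value fact itself is not in dispute.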
Of course, it is surely true that some of the claims people make using the word “should” can be understood as claims about the natural world. Words can, after all, be used in many different ways. But it’s the claims that can’t easily be understood in this way that non-naturalist realists such as Parfit, Enoch, and Moore have in mind. In general, I agree with the view that the key division in metaethics is between self-identified non-naturalist realists on the one hand and self-identified anti-realists and naturalist realists on the other hand, since “naturalist realists” are in fact anti-realists with regard to the distinctively normative properties of decisions that non-naturalist realists are talking about. If we rule out non-naturalist realism as a position then it seems the main remaining question is a somewhat boring one about semantics: When someone makes a statement of form “A should do X,” are they most commonly expressing some sort of attitude (non-cognitivism), making a claim about the natural world (naturalist realism), or making a claim about some made-up property that no actions actually possess (error theory)?
Here, for example, is how Michael Huemer (a non-naturalist realist) expresses this point in his book Ethical Intuitionism (pg. 8):
[Non-naturalist realists] differ fundamentally from everyone else in their view of the world. [Naturalist realists], non-cognitivists, and nihilists all agree in their basic view of the world, for they have no significant disagreements about what the non-evaluative facts are, and they all agree that there are no further facts over and above those. They agree, for example, on the non-evaluative properties of the act of stealing, and they agree, contra the [non-naturalist realists], that there is no further, distinctively evaluative property of the act. Then what sort of dispute do the [three] monistic theories have? I believe that, though this is not generally recognized, their disputes with each other are merely semantic. Once the nature of the world ‘out there’ has been agreed upon, semantic disputes are all that is left.
I think this attitude is in line with the viewpoint that Luke Muehlhauser expresses in his classic LessWrong blog post on what he calls “pluralistic moral reductionism.” PMR seems to me to be the view that: (a) non-naturalist realism is false, (b) all remaining meta-normative disputes are purely semantic, and (c) purely semantic disputes aren’t terribly substantive and often reflect a failure to accept that the same phrase can be used in different ways. If we define the view this way, then, conditional on non-naturalist realism being false, I believe that PMR is the correct view. I believe that many non-naturalist realists would agree on this point as well.
This point is made by Parfit in On What Matters. He writes: “We could not have decisive reasons to believe that there are no such normative truths, since the fact that we had these reasons would itself have to be one such truth. This point may not refute this kind of skepticism, since some skeptical arguments might succeed even if they undermined themselves. But this point shows how deep such skepticism goes, and how blank this skeptical state of mind would be” (On What Matters, Section 86).
The PhilPapers survey suggests that philosophers who favor realism outnumber philosophers who favor anti-realism by about a 2:1 ratio.
Speaking for myself (though I think many other rationalists think similarly), I approach this question with a particular mindset that I’m not sure how to describe exactly, but I would like to gesture at with some notes (apologies if all of these are obvious, but I want to get them out there for the sake of clarity):
Abstractions tend to be leaky
As Sean Carroll would say, there are different “ways of talking” about phenomena, on different levels of abstraction. In physics, we use the lowest level (and talk about quantum fields or whatever) when we want to be maximally precise, but that doesn’t mean that higher-level emergent properties don’t exist. (Just because temperature is an aggregate property of fast-moving particles doesn’t mean that heat isn’t “real.”) And it would be a total waste of time not to use the higher-level concepts when discussing higher-level phenomena (e.g. temperature, pressure, color, consciousness, etc.).
Various intuitive properties that we would like systems to have may turn out to be impossible, either individually, or together. Consider Arrow’s theorem for voting systems, or Gödel’s incompleteness theorems. Does the existence of these results mean that no voting system is better than any other? Or that formal systems are all useless? No, but they do mean that we may have to abandon previous ideas we had about finding the one single correct voting procedure, or axiomatic system. We shouldn’t stop talking about whether a statement is provable, but, if we want to be precise, we should clarify which formal system we’re using when we ask the question.
Phenomena that a folk or intuitive understanding sees as one thing often turn out, on careful inspection, to be two (or more) things, or to be meaningless in certain contexts. E.g. my compass points north. But if I’m in Greenland, the place my compass points and the place where the rotational axis of the earth meets the surface are no longer the same thing. And if I’m in space, there just is no north anymore (or up, for that matter).
When you go through an ontological shift, and discover that the concepts you were using to make sense of the world aren’t quite right, you don’t have to just halt, melt, and catch fire. It doesn’t mean that all of your past conclusions were wrong. As Eliezer would say, you can rescue the utility function.
This state of having leaky abstractions, and concepts that aren’t quite right, is the default. It is rare that an intuitive or folk concept survives careful analysis unmodified. Maybe whole numbers would be an example that’s unmodified. But even there, our idea of what a ‘number’ is is very different from what people thought a thousand years ago.
With all that in mind as background, when I come to the question of morality or normativity, it seems very natural to me that one might conclude that there is no single objective rule, or set of rules or whatever, that exactly matches our intuitive idea of “shouldness”.
Does that mean I can’t say which of two actions is better? I don’t think so. It means that when I do, I’m probably being a bit imprecise, and what I really mean is some combination of the emotivist statement referenced in the post, plus a claim about what consequences will follow from the action, combined with an implicit expression of belief about how my listeners will feel about those consequences, etc.
I think basically all of the examples in the post of rationalists using normative language can be seen as examples of this kind of shorthand. E.g. saying that one should update one’s credences according to Bayes’s rule is shorthand for saying that this procedure will produce the most accurate beliefs (and also that I, the speaker, believe it is in the listener’s best interest to have accurate beliefs, and etc.).
For me it seems like a totally natural and unsurprising state of affairs for someone to both believe that there is no single precise definition of normativity that perfectly matches our folk understanding of shouldness (or that otherwise is the objectively “correct” morality), and also for that person to go around saying that one should do this or that, or that something is the right thing to do.
Similarly, if your physicist friend says that two things happened at the same time, you don’t need to play gotcha and say, “Ah, but I thought you said there was no such thing as absolute simultaneity.” You just assume that they actually mean a more complex statement, like “Approximately at the same time, assuming the reference frame of someone on the surface of the Earth.”
A folk understanding of morality might think it’s defined as:
what everyone in their hearts knows is right
what will have the best outcomes for me personally in the long run
what will have the best outcomes for the people I care about
what God says to do
what makes me feel good to do after I’ve done it
what other people will approve of me having done
And then it turns out that there just isn’t any course of action, or rule for action, that satisfies all those properties.
My bet is that there just isn’t any definition of normativity that satisfies all the intuitive properties we would like. But that doesn’t mean that I can’t go around meaningfully talking about what’s right in various situations, any more than the fact that the magnetic pole isn’t exactly on the axis of rotation means that I can’t point in a direction if someone asks me which way is north.
I’m not sure if my position would be considered “moral anti-realist”, but if so, it seems to me a bit like calling Einstein a “space anti-realist”, or a “simultaneity anti-realist”. Einstein says that there is space, and there is simultaneity. They just don’t match our folk concepts.
I feel like my position is more like, “we actually mean a bunch of different related things when we use normative language and many of those can be discussed as matters of objective fact” than “any discussion of morality is vacuous”.
Does that just mean I’m an anti-realist (or naturalist realist?) and not an error theorist?
EDIT: after following the link in the footnotes to Luke’s post on Pluralistic Moral Reductionism, it seems like I am just advocating the same position.
EDIT2: But given that the author of this post was aware of that post, I’m surprised that he thought rationalist’s use of normative statements was evidence of contradiction (or tension), rather than of using normative language in a variety of different ways, as in Luke’s post. Does any of the tension survive if you assume the speakers are pluralistic moral reductionists?
That’s a great way to describe it. I think this is completely normal for anti-realists (at least in EA and rationality). Somehow the realists rarely seem to pass the Ideological Turing Test for anti-realism (of course, similar things can be said for the other direction and I think Ben Garfinkel’s post explains really well some of the intuitions that anti-realists might be missing, or ways in which some might simplify their picture).
Quite related: The Wikipedia page on Anti-realism was recently renamed to “Nihilism.” While that’s ultimately just semantics, I think this terminological move is insane. It’s a bit as though the philosophers who believe in Libertarian Free Will had conspired to only use the term “Fatalism” for both Determinism and Compatibilism.
Re-posting a link here, on the off-chance it’s of interest despite its length. ESRogs and I also had a parallel discussion on the EA Forum, which led me to write up this unjustifiably lengthy doc partly in response to that discussion and partly in response to the above comment.
Thanks for this! My thinking is similar (I have an early draft about why realists and anti-realists disagree with one another, and have been trying to get closer to passing the Ideological Turing Test for realism. It was good to be able to compare my thinking to that of someone with stronger sympathies toward realism!)
I wish when people did this kind of thing (i.e., respond to other people’s ideas, arguments, or positions) they would give some links or quotes, so I can judge whether whatever they’re responding to is being correctly understood and represented. In this case, I feel like there aren’t actually that many people who identify as normative anti-realists (i.e., deny that any kind of normative facts exist). More often I see people who are realist about rationality, but anti-realist, subjectivist, or relativist about morality. (See my Six Plausible Meta-Ethical Alternatives for a quick intro to these distinctions.)
Your footnote 1 suggests that maybe you think these distinctions don’t really exist (or something like that) and therefore we should just consider realism vs anti-realism, where realism means that all types of normative facts exist and anti-realism means that all types of normative facts don’t exist. If so, I think this needs to be explicitly spelled out and defended before you start assuming it.
Fair point!
It’s definitely possible I’m underestimating the popularity of realist views. In which case, I suppose this post can be taken as a mostly redundant explanation of why I think people are sensible to have these views :)
I guess there are a few reasons I’ve ended up with the impression that realist views aren’t very popular.
People are often very dismissive of “moral realism.” (If this doesn’t seem right, I think I should be able to pull up quotes.) But nearly all standard arguments against moral realism also function as arguments against “normative realism” as well. The standard concerns about ‘spookiness’ and ungrounded epistemology arise as soon as we accept that there are facts of the matter about what we should do and that we can discover these facts; it doesn’t lessen the fundamental metaphysical or epistemological issues whether these facts, for example, tell us to try to maximize global happiness or to try to fulfill the preferences of some particular idealized version of ourselves. It also seems to be the case that philosophers who identify as “moral anti-realists” are typically anti-realists about normativity, which I think partly explains why people seldom bother to tease the terms “moral realist” and “normative realist” apart in the first place. So I suppose I’ve been leaning on a prior that people who identify as “moral anti-realists” are also “normative anti-realists.”
(Edit) It seems pretty common for people in the community to reject or attack the idea of “shoulds.” For example, many posts in the (popular?) “Replacing Guilt” sequence on Minding Our Way seem to do this. A natural reading is a rejection of normative realism.
Small-n, but the handful of friends I’ve debated moral realism with have also had what I would tend to classify as anti-realist attitudes toward normativity more generally.
If normative realism is correct, then it’s at least conceivable that the action it’s most “reasonable” for us to take in some circumstance (i.e. the action that we “should” take) is different from the action that someone who tends to “win” a lot over the course of their life would take. However, early/foundational community writing seems to reject the idea that there’s any meaningful conceptually distinct sense in which we can talk about an action being “reasonable.” I take this Eliezer post on decision theory and rationality as an example.
It might also be useful to clarify that in ricraz’s recent post criticizing “realism about rationality,” several of the attitudes listed aren’t directly related to “realism” in the sense of this post. For example, it’s possible for there to be “a simple yet powerful theoretical framework which describes human intelligence” even if normative anti-realism is true. It did seem to me like the comments on ricraz’s post leaned toward wariness of “realism,” as conceptualized there, but I’m not really sure how to map that onto attitudes about the notion of “realism” I have in mind here.
It’s important to disentangle two claims:
1. In general, if you have the goal of understanding the world, or any other goal that relies on doing so, being Bayesian will allow you to achieve it to a greater extent than any other approach (in the limit of infinite compute).
2. Regardless of your goals, you should be Bayesian anyway.
Believing #2 commits you to normative realism as I understand the term, but believing #1 doesn’t—#1 is simply an empirical claim about what types of cognition tend to do best across a broad class of goals. I think that many rationalists would defend #1, and few would defend #2—if you disagree, I’d be interested in seeing examples of the latter. (One test is by asking “Aside from moral considerations, if someone’s only goal is to have false beliefs right now, should they believe true things anyway?”) Either way, I agree with Wei that distinguishing between moral normativity and epistemic normativity is crucial for fruitful discussions on this topic.
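For what it’s worth, claim #1, read as a purely empirical claim, is the kind of thing one can demonstrate in a toy setting. The following sketch (my own, with arbitrary parameters) shows a Bayesian observer’s posterior concentrating on the true bias of a coin as evidence accumulates—no “should” required:

```python
import random

random.seed(0)
TRUE_BIAS = 0.7                                   # the "world" to be understood
hypotheses = [i / 100 for i in range(1, 100)]     # candidate coin biases
posterior = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

for _ in range(2000):
    heads = random.random() < TRUE_BIAS
    # Bayes's rule: multiply by the likelihood of the observation, renormalize.
    posterior = {h: p * (h if heads else 1 - h) for h, p in posterior.items()}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)
assert abs(best - TRUE_BIAS) < 0.05  # the posterior mode lands near the truth
```

Nothing in the simulation says anyone is obligated to update this way; it just exhibits the goal-relative success that #1 asserts.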
Another way of framing this distinction: assume there’s one true theory of physics, call it T. Then someone might make the claim “Modelling the universe using T is the correct way to do so (in the limit of having infinite compute available).” This is analogous to claim #1, and believing this claim does not commit you to normative realism, because it does not imply that anyone should want to model the universe correctly.
I would characterise “realism about rationality” as approximately equivalent to claim #1 above (plus a few other similar claims). In particular, it is a belief about whether there is a set of simple ideas which elegantly describe the sort of “agents” who do well at their “goals”—not a belief about the normative force of those ideas. Of course, under most reasonable interpretations of #2, the truth of #2 implies #1, but not vice versa.
Eliezer used some pretty strong normative language when talking about having false beliefs, e.g. in Dark Side Epistemology:
The quote from Eliezer is consistent with #1, since it’s bad to undermine people’s ability to achieve their goals.
More generally, you might believe that it’s morally normative to promote true beliefs (e.g. because they lead to better outcomes) but not believe that it’s epistemically normative, in a realist sense, to do so (e.g. the question I asked above, about whether you “should” have true beliefs even when there are no morally relevant consequences and it doesn’t further your goals).
I don’t necessarily think that #2 is a common belief. But I do have the impression that many people would at least endorse this equally normative claim: “If you have the goal of understanding the world, you should be a Bayesian.”
In general—at least in the context of the concepts/definitions in this post—the inclusion of an “if” clause doesn’t prevent a claim from being normative. So, for example, the claim “You should go to Spain if you want to go to Spain” isn’t relevantly different from the claim “You should give money to charity if you have enough money to live comfortably.”
I agree there’s an important distinction, but it doesn’t necessarily seem that deep to me.
For example: We can define different “epistemic utility functions” that map {agent’s credences; state of the world} to real values and then discuss theories like Bayesianism in the context of “epistemic decision theory,” in relatively close analogy with traditional (practical) decision theory.
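As a concrete illustration of that analogy (my own sketch; the negative Brier score is one standard choice of epistemic utility function in that literature), an epistemic utility function might look like:

```python
def brier_epistemic_utility(credences, world):
    """Negative Brier score: penalizes credences by squared distance from truth.
    credences: dict proposition -> credence in [0, 1]
    world: dict proposition -> True/False (the actual state of the world)."""
    return -sum((credences[p] - float(world[p])) ** 2 for p in credences)

world = {"rain": True, "wind": False}
confident = {"rain": 0.9, "wind": 0.1}   # credences close to the truth
hedged = {"rain": 0.5, "wind": 0.5}      # maximally uncertain credences

# Accurate credences receive higher "epistemic utility" than hedged ones:
assert brier_epistemic_utility(confident, world) > brier_epistemic_utility(hedged, world)
```

Epistemic decision theory then evaluates updating rules by the epistemic utility they can be expected to deliver, in close analogy with practical expected-utility theory.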
It seems like some theories—e.g. certain theories that say we should have faith in the existence of God, or theories that say that we shouldn’t take into account certain traits when forming impressions of people—might also be classified as both moral and epistemological.
Okay, this seems like a crux of our disagreement. This statement seems pretty much equivalent to my statement #1 in almost all practical contexts. Can you point out how you think they differ?
I agree that some statements of that form seem normative: e.g. “You should go to Spain if you want to go to Spain”. However, that seems like an exception to me, because it provides no useful information about how to achieve the goal, and so from contextual clues would be interpreted as “I endorse your desire to go to Spain”. Consider instead “If you want to murder someone without getting caught, you should plan carefully”, which very much lacks endorsement. Or even “If you want to get to the bakery, you should take a left turn here.” How do you feel about the normativity of the last statement in particular? How does it practically differ from “The most convenient way to get to the bakery from here is to take a left turn”? Clearly that’s something almost everyone at Less Wrong and elsewhere is a realist about (assuming a shared understanding of “convenient”).
I think there’s a difference between a moral statement with conditions, and a statement about what is best to do given your goals (roughly corresponding to the difference between Kant’s categorical and hypothetical imperatives). “You should give money to charity if you have enough money to live comfortably” is an example of the former—it’s the latter which I’m saying aren’t normative in any useful sense.
This stuff is definitely a bit tricky to talk about, since people can use the word “should” in different ways. I think that sometimes when people say “You should do X if you want Y” they do basically just mean to say “If you do X you will receive Y.” But it doesn’t seem to me like this is always the case.
A couple examples:
1. “Bayesian updating has a certain asymptotic convergence property, in the limit of infinite experience and infinite compute. So if you want to understand the world, you should be a Bayesian.”
If the first and second sentence were meant to communicate the same thing, then the second would be totally vacuous given the first. Anyone who accepted the first sentence could not intelligibly disagree with or even really consider disagreeing with the second. But I don’t think that people who say things like this typically mean for the second sentence to be vacuous or typically regard disagreement as unintelligible.
Suppose, for example, that I responded to this claim by saying something like: “I disagree. Since we only have finite lives, asymptotic convergence properties don’t have direct relevance. I think we should instead use a different ‘risk averse’ updating rule that, for agents with finite lives, more strongly reduces the likelihood of ending up with especially inaccurate beliefs about key features of the world.”
The speaker might think I’m wrong. But if the speaker thinks that what I’m saying constitutes intelligible disagreement with their claim, then it seems like this means their claim is in fact a distinct normative one.
2. (To someone with no CS background) “If you want to understand the world, you should be a Bayesian.”
If this sentence were meant to communicate the same thing as the claim about asymptotic convergence, then the speaker shouldn’t expect the listener to understand what they’re saying (even if the speaker has already explained what it means to be a Bayesian). Most people don’t naturally understand or care at all about asymptotic convergence properties.
I was a little imprecise in saying that they’re exactly equivalent—the second sentence should also have an “in the limit of infinite compute” qualification. Or else we need a hidden assumption like “These asymptotic convergence properties give us reason to believe that even low-compute approximations to Bayesianism are very good ways to understand the world.” This is usually left implicit, but it allows us to think of “if you want to understand the world, you should be (approximately) a Bayesian” as an empirical claim not a normative one. For this to actually be an example of normativity, it needs to be the case that some people consider this hidden assumption unnecessary and would endorse claims like “You should use low-compute approximations to Bayesianism because Bayesianism has certain asymptotic convergence properties, even if those properties don’t give us any reason to think that low-compute approximations to Bayesianism help you understand the world better.” Do you expect that people would endorse this?
Hmm, I think focusing on a simpler case might be better for getting at the crux.
Suppose Alice says: “Eating meat is the most effective way to get protein. So if you want to get protein, you should eat meat.”
And then Bob, an animal welfare person, responds: “You’re wrong, people shouldn’t eat meat no matter how much they care about getting protein.”
If Alice doesn’t mean for her second sentence to be totally redundant—or if she is able to interpret Bob’s response as an intelligible (if incorrect) statement of disagreement with her second sentence—then that suggests her second sentence actually constitutes a substantively normative claim. Her second sentence isn’t just repeating the same non-normative claim as the first one.
I definitely don’t think that all “If you want X, do Y” claims are best understood as normative claims. It’s possible that when people make claims of this form about Bayesianism, and other commonly discussed topics, they’re not really saying anything normative. But a decent chunk of statements of this form do strike me as difficult to interpret in non-normative terms.
I don’t think you can declare a sentence redundant without also considering the pragmatic aspects of meaning. In this example, Alice’s second sentence is a stronger claim than the first, because it again contains an implicit clause: “If you want to get protein, and you don’t have any other relevant goals, you should eat meat”. Or maybe it’s more like “If you want to get protein, and your other goals are standard ones, you should eat meat.”
Compare: Alice says “Jumping off cliffs without a parachute is a quick way to feel very excited. If you want to feel excited, you should jump off cliffs without a parachute.” Bob says “No you shouldn’t, because you’ll die.” Alice’s first sentence is true, and her second sentence is false, so they can’t be equivalent—but both of them can be interpreted as goal-conditional empirical sentences. It’s just the case that when you make broad statements, pragmatically you are assuming a “normal” set of goals.
It’s not entirely unintelligible, because Alice is relying on an implicit premise of “standard goals” I mentioned above, and the reason people like Bob are so outspoken on this issue is because they’re trying to change that norm of what we consider “standard goals”. I do think that if Alice really understood normativity, she would tell Bob that she was trying to make a different type of claim to his one, because his was normative and hers wasn’t—while conceding that he had reason to find the pragmatics of her sentence objectionable.
Also, though, you’ve picked a case where the disputed statement is often used both in empirical ways and in normative ways. This is the least clear sort of example (especially since, pragmatically, when you repeat almost the same thing twice, it makes people think you’re implying something different). The vast majority of examples of people using “if you want..., then you should...” seem clearly empirical to me—including many that are in morally relevant domains, where the pragmatics make their empirical nature clear:
A: “If you want to murder someone without getting caught, you should plan carefully.”
B: “No you shouldn’t, because you shouldn’t murder people.”
A: “Well obviously you shouldn’t murder people, but I’m just saying that if you wanted to, planning would make things much easier.”
Upon further thought, maybe just splitting up #1 and #2 is oversimplifying. There’s probably a position #1.5, which is more like “Words like ‘goals’ and ‘beliefs’ only make sense to the extent that they’re applied to Bayesians with utility functions—every other approach to understanding agenthood is irredeemably flawed.” This gets pretty close to normative realism because you’re only left with one possible theory, but it’s still not making any realist normative claims (even if you think that goals and beliefs are morally relevant, as long as you’re also a moral anti-realist). Maybe a relevant analogy: you might believe that using any axioms except the ZFC axioms will make maths totally incoherent, while not actually holding any opinion on whether the ZFC axioms are “true”.
I think there’s a distinction (although I’m not sure if I’ve talked explicitly about it before). Basically, there’s quite possibly more to what the “right” or “reasonable” action is than “the action that someone who tends to ‘win’ a lot over the course of their life would take,” because the latter isn’t well defined. In a multiverse, the same strategy/policy would lead to 100% winning in some worlds/branches and 100% losing in other worlds/branches, so you’d need some kind of “measure” to say who wins overall. But what the right measure is seems to be (or could be) a normative fact that can’t be determined just by looking at or thinking about who tends to “win” a lot.
ETA: Another way that “tends to win” isn’t well defined is that if you look at the person who literally wins the most, they might just be very lucky instead of actually doing the “reasonable” thing. So I think “tends to win” is more of an intuition pump for what the right conception of “reasonable” is than actually identical to it.
I agree with you on this and think it’s a really important point. Another (possibly redundant) way of getting at a similar concern, without invoking MW:
Due to randomness/uncertainty, an agent that tries to maximize expected “winning” won’t necessarily win compared to an agent that does something else. If I spend a dollar on a lottery ticket with a one-in-a-billion chance of netting me a billion-and-one “win points,” then I’m taking the choice that maximizes expected winning but I’m also almost certain to lose. So we can’t treat “the action that maximizes expected winning” as synonymous with “the action taken by an agent that wins.”
We can try to patch up the issue here by defining “the action that I should take” as “the action that is consistent with the VNM axioms,” but in fact either action in this case is consistent with the VNM axioms. The VNM axioms don’t imply that an agent must maximize the expected desirability of outcomes. They just imply that an agent must maximize the expected value of some function. It is totally consistent with the axioms, for example, to be risk averse and instead maximize the expected square root of desirability. If we try to define “the action I should take” in this way, then, as another downside, the claim “your actions should be consistent with the VNM axioms” also becomes a completely empty tautology.
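The arithmetic here can be made concrete with a minimal sketch (the lottery numbers are the ones from the comment above; the identity and square-root utility functions are illustrative choices). Both agents maximize the expected value of *some* function, so both are VNM-consistent, yet they choose differently:

```python
import math

# Two lotteries over "win points", as (probability, outcome) pairs:
# keep:   hold on to your dollar for a sure 1 point
# ticket: one-in-a-billion chance of a billion-and-one points, else 0
keep = [(1.0, 1.0)]
ticket = [(1e-9, 1e9 + 1), (1 - 1e-9, 0.0)]

def expected(utility, lottery):
    """Expected utility of a lottery under a given utility function."""
    return sum(p * utility(x) for p, x in lottery)

# Agent A maximizes expected winning (utility = identity);
# Agent B is risk averse and maximizes the expected sqrt of winning.
print(expected(lambda x: x, ticket) > expected(lambda x: x, keep))    # True
print(expected(math.sqrt, ticket) > expected(math.sqrt, keep))        # False
```

Agent A buys the ticket (expected winnings just over 1 point) even though it loses with near certainty; Agent B keeps the dollar. Neither choice violates the VNM axioms.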
So it seems very hard to make non-vacuous and potentially true claims about decision theory without invoking some additional non-reducible notion of “reasonableness,” “rationality,” or what an actor “should” do. Assuming that normative anti-realism is true pretty much means assuming that there is no such notion, or that the notion doesn’t actually map onto anything in reality. And I think anti-realist views of this sort are plausible (probably for roughly the same reasons Eliezer seems to). But I think that adopting these views would also leave us with very little to say about decision theory.
I wouldn’t expect lesswrongians to be keen on Platonic-style moral realism, where moral facts correspond to supernatural objects, but there are other classes of morally realist theories, where moral facts depend on analytical truths or natural states of affairs. Lesswrongians are definitely keen on utilitarianism, where ethical claims depend on natural facts about preferences; utilitarianism is therefore, arguably, a naturalistic form of moral realism.
The is-ought gap remains a problem which I touch on below.
If normative realism is just the claim that there are meaningful and true statements about what you should do if you want to achieve some X, then such statements are abundant: game theory, engineering, and indeed any kind of methodology have plenty of them.
What are the problems with normative realism about moral claims, then? Maybe that they are categorical, lacking an “if you want to do X” condition.
This seems wrong to me. Could you say more about why you think this?
I left a sub-comment under Wei’s comment (above) that hopefully unpacks this suggestion a bit
Seconding this—my strong impression is that a substantial percentage of the rationality community rejects moral realism, not normative realism (as you say—what would the point of anything be?).
I’m curious where this impression came from. The only place I can imagine anything similar to an argument against normative realism cropping up would be in a discussion of the problem of induction, which hasn’t seen serious debate around here for many years.
It sounds as though you’re expecting anti-realists about normativity to tell you some arguments that will genuinely make you feel (close to) indifferent about whether to use Bayesianism, or whether to use induction. But that’s not how I understand anti-realism. The way I would describe it, the primary claim that anti-realism about normativity entails is of a more trivial kind. More something like this:
If anti-realism about normativity is true, then in a hypothetical world where your mind worked in some strange way such that you found induction or Bayesianism dumb, it’s impossible to point out and justify the exact sense in which you would be mistaken by some “universally approved standard.” So the question shouldn’t be “Have I ever seen someone give an argument to start doubting induction?” Rather, I would ask “Have I ever seen someone give a convincing and non-question-begging account of what aliens who don’t believe in induction are doing wrong?”
In practice, the difference between realism and anti-realism only matters in cases where the answer doesn’t feel like the straightforward thing to do anyway. If Bayesianism and induction feel like the straightforward thing for you to do, you’ll use them whether you endorse realism or not. I’d argue that realists therefore shouldn’t use example propositions that provoke universal agreement (at least not as standalone examples) when they want to explain what constitutes an objective reason. Because by using examples that evoke universal agreement, they’re only pointing at reasons that we can already tell will feel convincing to people. What I want to know, as an anti-realist, is what it means for there to be irreducibly normative reasons that go beyond what I personally find convincing. The realists seem to think that just like in cases where we’re inclined to call a proposition “right” because it feels self-evident to everyone, there’s just as much of a fact of the matter for other propositions about which people will be in seemingly irresolvable disagreement. But I have yet to see how that’s a useful concept to introduce. I just don’t get it.
Edit:
I was strawmanning realism a bit here. Realists readily point out that the sense in which this is a mistake cannot be “explained” (at least not in non-question-begging terminology, i.e., not without the use of normative terminology). So in one sense, realism is simply a declaration that the intuition that some standards apply beyond the personal/subjective level is too important to give up on. But by itself, that declaration doesn’t yet make for a specific position, and it depends on further assumptions whether the disagreement will be only semantic, or also substantive.
Hm, this actually isn’t an expectation I have. When I talk about “realists” and “anti-realists,” in this post, I’m thinking of groups of people with different beliefs (rather than groups of people with different feelings). I don’t think of anti-realism as having any strong link to feelings of indifference about behavior. For example: I certainly expect most anti-realist philosophers to have strong preferences against putting their hands on hot stoves (and don’t see anything inconsistent in this).
I guess I don’t see it as a matter of usefulness. I have this concept that a lot of other people seem to have too: the concept of the choice I “should” make or that it would be “right” for me to make. Although pretty much everyone uses these words, not everyone reports having the same concept. Nonetheless, at least I do have the concept. And, insofar as there is any such thing as the “right thing,” I care a lot about doing it.
We can ask the question: “Why should people care about doing what they ‘should’ do?” I think the natural response to this question, though, is just to invoke a tautology: people should care about doing what they should do, because they should do what they should do.
To put my “realist hat” firmly on for a second: I don’t think, for example, that someone happily abusing their partner would in any way find it “useful” to believe that abuse is wrong. But I do think they should believe that abuse is wrong, and take this fact into account when deciding how to act, because abuse is wrong.
I’m unfortunately not sure, though, if I have anything much deeper or more compelling than that to say in response to the question.
Another (significantly more rambling and possibly redundant) thought on “usefulness”:
One of the main things I’m trying to say in the post—although, in hindsight, I’m unsure if I communicated it well—is that there are a lot of debates that I personally have trouble interpreting as both non-trivial and truth-oriented if I assume that the debaters aren’t employing irreducibly normative concepts. A lot of debates about decision theory have this property for me.
I understand how it’s possible for realists to have a substantive factual disagreement about the Newcomb scenario, for example, because they’re talking about something above-and-beyond the traditional physical facts of the case (which are basically just laid out in the problem specification). But if we assume that there’s nothing above-and-beyond the traditional physical facts, then I don’t see what there’s left for anyone to have a substantive factual disagreement about.
If we want to ask “Which amount of money is the agent most likely to receive, if we condition on the information that it will one-box?”, then it seems to me that pretty much everyone agrees that “one million dollars” is the answer. If we want to ask “Would the agent get more money in a counterfactual world where it instead two-boxes, but all other features of the world at that time (including the contents of the boxes) are held fixed?”, then it seems to me that pretty much everyone agrees the answer is “yes.” If we want to ask “Would the agent get more money in a counterfactual world where it was born as a two-boxer, but all other features of the world at the time of its birth were held fixed?”, then it seems to me that pretty much everyone agrees the answer is “no.” So I don’t understand what the open question could be. People may of course have different feelings about one-boxing and about two-boxing, in the same way that people have different feelings about (e.g.) playing tennis and playing soccer, but that’s not a matter of factual/substantive disagreement.
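The three conditional questions can be written out as a tiny sketch (a perfect predictor and the standard Newcomb payoffs of $1M and $1k are assumed here; neither figure appears in the comment above). Each question then has a single answer that, per the argument, pretty much everyone accepts:

```python
# Newcomb's problem with a perfect predictor.
# The opaque box holds $1M iff the agent was predicted to one-box;
# the transparent box always holds $1k.
def box_contents(predicted_one_boxer):
    opaque = 1_000_000 if predicted_one_boxer else 0
    return opaque, 1_000

def payoff(one_boxes, opaque, transparent):
    return opaque if one_boxes else opaque + transparent

# Q1: conditioning on the agent one-boxing (so it was predicted to):
opaque, transparent = box_contents(predicted_one_boxer=True)
q1 = payoff(True, opaque, transparent)

# Q2: counterfactually two-boxing with the box contents held fixed:
q2 = payoff(False, opaque, transparent)

# Q3: counterfactually born a two-boxer (so the prediction changes too):
opaque2, transparent2 = box_contents(predicted_one_boxer=False)
q3 = payoff(False, opaque2, transparent2)

print(q1, q2, q3)  # 1000000 1001000 1000
```

Q1 yields the million; Q2 yields more than Q1 (“yes”); Q3 yields far less (“no”). All three answers fall straight out of the problem specification, which is the point: without something above-and-beyond these facts, there is nothing left to dispute.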
So this is sort of one way in which irreducibly normative concepts can be “useful”: they can, I think, allow us to make sense of and justify certain debates that many people are strongly inclined to have and certain questions that many people are strongly inclined to ask.
But the above line of thought of course isn’t, at least in any direct way, an argument for realism actually being true. Even if the line of thought is sound, it’s still entirely possible that these debates and questions just actually aren’t non-trivial and truth-oriented. Furthermore, the line of thought could also just not be sound. It’s totally possible that the debates/questions are non-trivial and truth-oriented without invoking irreducibly normative concepts, and I’m just a confused outside observer not getting what’s going on. Tonally, one thing I regret about the way I wrote this post is that I think it comes across as overly skeptical of this possibility.
Yeah, that makes sense. I was mostly replying to T3t’s comment, especially this part:
Upon re-reading T3t’s comment, I now think I interpreted them uncharitably. Probably they meant that because induction seems impossible to justify, one way to “explain” this or come to terms with this is by endorsing anti-realism. (That interpretation would make sense to me!)
I see. I think I understand the motivation to introduce irreducibly normative concepts into one’s philosophical repertoire. Therefore, saying “I don’t see the use” was a bit misleading. I think I meant that even though I understand the motivation, I don’t actually think we can make it work. I also kind of see the motivation behind wanting libertarian free will, but I also don’t think that works (and probably you’d agree on that one). So, I guess my main critique is that irreducibly normative concepts won’t add anything we can actually make use of in practice, because I don’t believe that your irreducibly normative concepts can ever be made coherent. I claim that if we think carefully about how words get their meaning, and then compare the situation with irreducibly normative concepts to other words, it’ll become apparent that the irreducibly normative concepts have connotations that cannot go together with each other (at least not under the IMO proper account of how words get their meaning).
So far, the arguments for my claim are mostly just implicitly in my head. I’m currently trying to write them up and I’ll post them on the EA forum once it’s all done. (But I feel like there’s a sense in which the burden of proof isn’t on the anti-realists here. If I was a moral realist, I’d want to have a good sense of how I could, in theory under ideal conditions, figure out normative truths. Or, if I accept the interpretation that it’s conceivable that humans are forever incapable of figuring out normative truths, I’d at least need to have *some sense* of what it would mean for someone to not be forever incapable of figuring things out. Otherwise, how could I possibly believe that I understand my own concept well enough for it to have any meaning?)
I think it’s true that there’d be much fewer substantive disagreements if more people explicitly accepted anti-realism. I find it good because then things feel like progress (but that’s mostly my need for closure talking.) That said, I think there are some interesting discussions to be had in an anti-realist framework, but they’d go a bit differently.
Sure. In this sense, I’m an error theorist (as you point out as a possibility in your last paragraph). But I think there’s a sense in which that’s a misleading label. When I shifted from realism to anti-realism, I didn’t just shrug my shoulders thinking “oh no, I made an error” and then stopped being interested in normative ethics (or normative decision theory). Instead, I continued to be very interested in these things, but started thinking about them in different ways. So even though “error theory” is the appropriate label in one way, there’s another sense in which the shift is about how to handle ontological crises.
What do you mean by a normative fact here? Could you give some examples?
The “morally normative” and “epistemically normative” examples in our conversation over on EAF are the kinds of things I’m referring to. ETA: Another example of a normative fact is if there is a right prior for a Bayesian.
The most popular meta-ethical views on LessWrong seem to be relatively realist ones, with views like non-cognitivism and error theory getting significantly less support. From the 2016 LessWrong diaspora survey (excluding people who didn’t pick one of the options):
772 respondents (39.5%) voted for “Constructivism: Some moral statements are true, and the truth of a moral statement is determined by whether an agent would accept it if they were undergoing a process of rational deliberation. ‘Murder is wrong’ can mean something like ‘Societal agreement to the rule “do not murder” is instrumentally rational’.”
550 respondents (28.2%) voted for “Subjectivism: Some moral statements are true, but not universally, and the truth of a moral statement is determined by non-universal opinions or prescriptions, and there is no nonattitudinal determinant of rightness and wrongness. ‘Murder is wrong’ means something like ‘My culture has judged murder to be wrong’ or ‘I’ve judged murder to be wrong’.”
346 respondents (17.7%) voted for “Substantive realism: Some moral statements are true, and the truth of a moral statement is determined by mind-independent moral properties. ‘Murder is wrong’ means that murder has an objective mind-independent property of wrongness that we discover by empirical investigation, intuition, or some other method.”
186 respondents (9.5%) voted for “Non-cognitivism: Moral statements don’t express propositions and can neither be true nor false. ‘Murder is wrong’ means something like ‘Boo murder!’.”
99 respondents (5.1%) voted for “Error theory: Moral statements have a truth-value, but attempt to describe features of the world that don’t exist. ‘Murder is wrong’ and ‘Murder is right’ are both false statements because moral rightness and wrongness aren’t features that exist.”
I suspect that a lot of rationalists would be happy to endorse any of the above five views in different contexts or on different framings, and would say that real-world moral judgment is complicated and doesn’t cleanly fit into exactly one of these categories. E.g., I think Luke Muehlhauser’s Pluralistic Moral Reductionism is just correct.
Thanks for sharing this, was not aware of the survey! Seems like this suggests I’ve gotten a skewed impression of the distribution of meta-ethical views, so in that sense the objection I raise in this post may only be relevant to a smaller subset of the community than I’d previously thought.
I agree with a lot of the spirit of PMR (that people use the word “should” to mean different things in different contexts), but think that there’s a particularly relevant and indispensable sense of the word “should” that points toward a not-easily-reducible property. Then the interesting non-semantic question to me—and to certain prominent “realists” like Enoch and Parfit—is whether any actions are actually associated with such a property.
(Within my cave of footnotes, I say a bit more on this point in FN14)
From looking at the footnotes, I think maybe you mean the one that begins, “These metaphysical and epistemological issues become less concerning if...” Wanted to note that this is showing up as #15 for me.
Example: Eliezer’s Extrapolated Volition is easy to round off to “constructivism”, By Which It May Be Judged to “substantive realism”, and Orthogonality Thesis and The Gift We Give To Tomorrow to “subjectivism”. I’m guessing it’s not a coincidence that those are also the most popular answers in the poll above, and that no one of them has majority support.
(Though I don’t think I could have made a strong prediction like this a priori. If non-cognitivism or error theory had done better, someone could have said “well, of course!”, citing LessWrong’s interest in signaling or their general reductionist/eliminativist/anti-supernaturalist tendencies.)
One thing I’m confused about in this post is whether constructivism and subjectivism count as realisms. The cited realists (Enoch and Parfit) are substantive realists.
I agree that substantive realists are a minority in the rationality community, but not that constructivists + subjectivists + substantive realists are a minority.
Sayre-McCord in SEP’s “Moral Realism” article:
Joyce in SEP’s “Moral Anti-Realism” article:
So, everyone defines “non-realism” so as to include error theory and non-cognitivism; some people define it so as to also include all or most views on which moral properties are in some sense “subjective.”
These ambiguities seem like good reasons to just avoid the term “realism” and talk about more specific positions, though I guess it works to think about a sliding scale where substantive realism is at one extreme, error theory and non-cognitivism are at the other extreme, and remaining views are somewhere in the middle.
Terminology definitely varies. FWIW, the breakdown of normative/meta-normative views I prefer is roughly in line with the breakdown Parfit uses in OWM (although he uses a somewhat wonkier term for “realism”). In this breakdown:
“Realist” views are ones under which there are facts about what people should do or what they have reason to do. “Anti-realist” views are ones under which there are no such facts. There are different versions of “realism” that claim that facts about what people should do are either “natural” (e.g. physical) or “non-natural” facts. If we condition on any version of realism, there’s then the question of what we should actually do. If we should only act to fulfill our own preferences—or pursue other similar goals that primarily have to do with our own mental states—then “subjectivism” is true. If we should also pursue ends that don’t directly have to do with our own mental states—for example, if we should also try to make other people happy—then “objectivism” is true.
It’s a bit ambiguous to me how the terms in the LessWrong survey map onto these distinctions, although it seems like “subjectivism” and “constructivism” as they’re defined in the survey probably would qualify as forms of “realism” on the breakdown I just sketched. I think one thing that sometimes makes discussions of normative issues especially ambiguous is that the naturalism/non-naturalism and objectivism/subjectivism axes often get blended together.
It seems like you’ve set up a dichotomy between there being universally compelling normative statements versus normative statements being meaningless, but what about the position that specific subsets of possible statements are compelling to specific people? Would that be realist, anti-realist, or neither?
If you mean “compelling” in the sense of “convincing” or “motivating,” then I actually don’t mean to suggest there are any “universally compelling normative statements.” I think it’s totally possible for there to be something that someone “should” do (e.g. being vegetarian), without this person either believing they should do it or acting on their belief.
This doesn’t seem too problematic to me, though, since most other kinds of statements also fail to be at least universally convincing. For example, I also think that the statement “the universe is billions of years old” is both true and not-universally-convincing. Some philosophers do still argue, though, that the failure of normative beliefs to consistently motivate people is a serious challenge for normative realism.
I think you’ve misunderstood the question, actually. “Compelling” here is to be read as in “No Universally Compelling Arguments”.
So the question that clone of saturn was asking, it seems to me (he can correct me if I’m misinterpreting) is: suppose I claim that it’s the case that Bob, or all humans, or all Americans living in Florida whose name begins with a ‘B’, or any other proper subset A of “all agents”, should do X. (And suppose that X is a general injunction, in which all terms are properly quantified, etc., so that its limited applicability is not due to any particular features of the situation(s) which agents in subset A find themselves in; in other words, “agents outside subset A should also do X” could be true, but—I claim—it is not.)
Now, is this realism, or anti-realism? I would not assent to the claim that “All agents should do [properly quantified] X”; yet nor would I assent to the claim “There is no fact of the matter about whether agents in subset A should do X”!
If there is anything that anyone should in fact do, then I would say that meets the standards of “realism.” For example, it could in principle turn out to be the case that the only normative fact is that the tallest man in the world should smile more. That would be an unusual normative theory, obviously, but I think it would still count as substantively normative.
I’m unsure whether this is a needlessly technical point, but sets of facts about what specific people should do also imply and are implied by facts about what everyone should do. For example, suppose that it’s true that everyone should do what best fulfills their current desires. This broad normative fact would then imply lots of narrow normative facts about what individual people should do. (E.g. “Jane should buy a dog.” “Bob should buy a cat.” “Ed should rob a bank.”) And we could also work backward from these narrow facts to construct the broad fact.
I interpret Eliezer’s post, perhaps wrongly, as focused on a mostly distinct issue. It reads to me like he’s primarily suggesting that for any given normative claim—for example, the claim that everyone should do what best fulfills their current desires or the claim that the tallest man should smile more—there is no argument that could convince any possible mind into believing the claim is true.
I agree with him at least on this point and think that most normative realists would also tend to agree.
Please let me know (either clone of saturn or Said) if it seems like I’m still not quite answering the right question :)
Does “anyone” refer to any human, or any possible being?
Because if it refers to humans, we could argue that humans have many things in common. For example, maybe any (non-psychopathic) human should donate at least a little to effective altruism, because effective altruism brings the change they would wish to happen.
But from the perspective of a hypothetical superintelligent spider living on Mars, donating to projects that effectively help humans is utterly pointless. (Assuming that spiders, even superintelligent ones, have zero empathy.)
I understand “moral realism” as a claim that there is a sequence of clever words that would convince the superintelligent spider that reducing human suffering is a good thing. Not merely because humans might reciprocate, or because it would mean more food for the spider once the space train to Mars is built, but because that is simply the right thing to do. Such thing, I believe, does not exist.
Sorry, I should have been clearer. I mean to say: “If there exists at least one entity, such that the entity should do something, then that meets the standards of ‘realism.’”
I don’t think I’m aware of anyone who identifies as a “moral realist” who believes this. At least, it’s not part of a normal definition of “moral realism.”
The term “moral realism” is used differently by different people, but typically it’s either used roughly synonymously with “normative realism” (as I’ve defined it in this post) or to pick out a slightly more specific position: that normative realism is true and that people should do things besides just try to fulfill their own preferences.
Some people seem to believe that about artificial intelligence. (Which will likely be more different from us than spiders are.)
OK. But does lack of universality imply lack of objectivity, or lack of realism?
Minimally, an objective truth is not a subjective truth, which is to say, it is not mind-dependent. Lack of mind dependence does not imply that objective truth needs to be the same everywhere, which is to say it does not imply universalism. I like to use the analogy of big G and little g in physics. Big G is a universal constant; little g is the local acceleration due to gravity, and varies from planet to planet (and, in a fine-grained way, at different points on the earth’s surface). But little g is perfectly objective, for all its lack of universality.
So it implies lack of realism? Assuming you set the bar for realism rather high. But lack of realism in that sense does not imply subjectivism or error theory.
I suppose “locally objective” would be how I see morality.
Like, there are things you would hypothetically consider morally correct under sufficient reflection, but perhaps you didn’t do the reflection, or maybe you aren’t even good enough at doing reflection. But there is a sense in which you can be objectively wrong about what is the morally right choice. (Sometimes the wrong choice becomes apparent later, when you regret your actions. But this is simply reflection being made easier by seeing the actual consequences instead of having to derive them by thinking.)
But ultimately, morality is a consequence of values, and values exist in brains shaped by evolution and personal history. Other species, or non-biological intelligences, could have dramatically different values.
Now we could play a verbal game over whether to define “values/morality” as “whatever a given species desires”, and then conclude that other species would most likely have different morality; or define “values/morality” as “whatever neurotypical humans desire, on sufficient reflection”, and then conclude that other species most likely wouldn’t have any morality. But that would be a debate about the map, not the territory.
Values vary plenty between humans, too. Yudkowsky might need “human value” to be a coherent entity for his theories to work, but that isn’t evidence that human value is in fact coherent. And, because values vary, moral systems vary. You don’t have to go to another planet to see multiple tokens of the type “morality”.
You should in fact pay your taxes. Which is to say that if a socially defined obligation is enough, then realism is true. But that might be setting the bar too low.
I missed this post when it was recent, but I’m glad someone referred me to it! I really liked it and it made me more motivated to finalize some posts related to this topic that I’ve long been postponing. After reading this post, I upshifted the importance of discussing other types of normative realism besides moral realism.
As an anti-realist, I feel like you haven’t quite captured what anti-realism combined with an interest in EA and rationality can be like. I have a few comments about that (here and also below other people’s comments).
That’s interesting! The most intuitively compelling “argument” I have for anti-realism is that it very much feels to me as though there’s nothing worth wanting that anti-realists are missing. I’m pretty sure that you can get to a point where your intuitions also come to reflect that – though I guess one could worry about this being some kind of epistemic drift. That’ll be my ambitious aim with my anti-realism sequence: providing people with enough immersion into the anti-realist framework that it’ll start to feel as though nothing worth wanting is missing. :)
This rings hollow to me because you apply the realist sense of “something being true.” Of course anti-realism isn’t true in that way. But everything that you believe for reasons other than “I think this is true in the realist sense” will still remain with you under an anti-realist framework. In other words: As an anti-realist, I’d recommend no longer caring about “objective reasons.” Most likely you’ll find that you can’t help but still care about what intuitively continues to feel like “reasons.” Then, think of those things as subjective reasons. This will feel like giving up on something extremely important, but it’s worth questioning whether that’s just an intuition rather than an actual loss.
I agree that anti-realists (my past and probably still current self included) often don’t pass the Ideological Turing Test. That said, my impression is that the anti-realist perspective is at least as strongly missing in some (usually Oxford-originating) EA circles as the realist perspective is missing among rationalists.
I agree. The closest anti-realist equivalent to moral uncertainty is what Brian Tomasik has called “valuing moral reflection.” Instead of having in mind a goal that’s fleshed out in direct terms, people might work toward an indirect goal of improved reflection, with the aim of eventually translating that into a direct goal. The important difference compared to the picture with moral realism is that not all the implications of valuing moral reflection are intuitive, and therefore, it’s not a “forced move.” Peer disagreement also works differently (I don’t update toward the career choices of MMA fighters because I don’t think my personality is suitable for that type of leisure activity or profession, but I do update toward the life choices of people who are similar to me in certain relevant senses). I think this (improved clarity about ways of being morally uncertain) is probably the major way in which getting metaethics right has action-guiding consequences. If I’m right about anti-realism, then people who consider themselves morally uncertain might not realize that they would have to cash this state of uncertainty out in some specific sense of “valuing moral reflection,” or that they might have underdetermined values. Perhaps underdetermined values are fine/acceptable – but that seems like the type of question that I at least would want to explicitly think about before implicitly deciding. (And for what it’s worth, I think there are quite strong reasons to value moral reflection to some degree as an anti-realist. I just think it’s complicated and not obvious, and people will likely come down on different sides of this if they realize that there’s a very real sense in which they are forced to take a stance on the object level, rather than taking what seems like the safe default of “being uncertain.”)
Argh! :D I think you might indeed be misunderstanding the point. I don’t think Nate gives “do what you want” as some kind of normative advice. Instead, I’m pretty sure this is meant in the “trivial” sense that people will by definition always do what they want, so they can continue to listen to their intuitions and subjective reasons without having to worry that they need to reach the exact same conclusions as everyone else. Nate is using the word “should” in the anti-realist sense. You’re still trying to interpret his statement with the realist “should” in mind – but anti-realists never use that type of “should.” (But maybe you were perfectly aware of that and you still insist on the realist sense of “should” because to you it seems like everything else doesn’t really matter? I often feel like the differences between realists and anti-realists come down to intuitions like that.)
I agree that this looks interesting, and that it’s not trivial to explain why exactly one would seem to care as an anti-realist. But ultimately, I think the explanation is perfectly intuitive. People in the rationalist community like to systematize, and decision theory is about systematizing. People have intuitions about what’s the best way to carve out useful concepts. To me, it provides me with a rewarding sense of insight if I can disentangle different ways in which things like causality are or aren’t relevant to my intuitions about caring about real-world outcomes. There’s a lot of progress to be made in philosophy at the level of carving out useful distinctions, without necessarily taking normative stances. People often tend to take normative stances, but many times that’s not even the most interesting bit. Anyway, decision theory is like cocaine for a certain type of intellectually curious person, and there’s a chance it’ll be relevant to real-world outcomes involving happiness and suffering. So thinking about it makes for a better, more existentially satisfying life project than many other things (for the right type of person).
From one of the footnotes:
I agree. This is a very minor point, but I feel like it’s worth pointing out that premise (b) (“all remaining meta-normative disputes are purely semantic”) might be something that people could somewhat legitimately disagree with. I personally think premise (b) is obviously correct, but I’m always more “black and white” on questions like these than a lot of people whose reasoning I hold in high regard. The point I’m trying to make is that if you deny (b), you get a kind of interesting naturalist metaethical position that’s different and seemingly more “realist” than PMR. It seems to me that we can imagine a world where people (for some reason or another) just end up agreeing with each other on basically all normative questions. In that world, it would empirically be the case that whenever there are normative disagreements, they tend to eventually get resolved one way or another once certain misunderstandings are pointed out. Of course, if the hypothesis is spelled out this way, it seems relatively clear that this would be a very ambitious claim. Therefore, I think that position is wrong. But quite a few people seem to think that if only we thought properly about the intrinsically motivating aspects of positive experiences, we’d all come to see that they are what matters, and from that, we could draw further conclusions toward a morality that will seem universally compelling to people who aren’t somehow conceptually confused. I think it’s worth having a name for that hypothesis. (In my introduction to moral realism, I called it “One Compelling Axiology,” but I’m not sure I like the name, and I’m also a bit unhappy with how I explained the position in that post.)
Edited to add: I think Wei Dai has also described this position in his post about six metaethical possibilities, but I don’t think he gave it a name there.
Just wanted to say I really appreciate you taking the time to write up such a long, clear, and thoughtful response!
(If I have a bit of time and/or need to procrastinate anything in the near future, I may write up a few further thoughts under this comment.)
I was tempted to downvote your post, but refrained, seeing how much effort you put into it. Sadly, it seems to miss the point of non-realism entirely, at least the way I understand it. I am not a realist, and have been quite vocal about my views here. Admittedly, they are rather more radical than those of many here. Mostly out of necessity: once you become skeptical about one realist position, then to be consistent you have to keep decompartmentalizing until the notions of reality, truth, and existence become nothing more than useful models. This obviously applies to normative claims as well, and so cognitivism is not wrong, but meaningless.
Consider thinking in terms of “useful” instead of “true”; it will magically remove all these internal contradictions you are struggling with. Sometimes it’s useful to follow the realist approach, and sometimes it doesn’t work, and so you do or say something that an anti-realist would endorse. No need to be dogmatic about it.