Two Dogmas of LessWrong
LessWrongers seem largely to be in agreement in rejecting robust moral realism and accepting physicalism about consciousness. This is a shame, I think, because both of these views are incorrect. The thing I find most frustrating is that they tend to be supremely confident on topics where a hefty percentage of philosophers disagree with them. ‘Don’t believe things that are widespread in your ingroup with super high confidence when a large percentage of philosophers disagree with you’ seems to be a pretty good heuristic—yet LessWrongers seem not to adhere to it, at least in evaluating these views.
I’d like to say at the outset a few things. First, this clearly doesn’t apply to all people on LessWrong. Second, I agree with LessWrongers on a huge number of things—AI risks, for example, as well as the desirability of effective altruism. I’m broadly on board with the project of being less wrong. Thus, my criticism of LW is less of the idea behind it, and more of the particular sets of beliefs that actual LessWrongers tend to have. Third, most of this will be a crosspost of things I’ve written elsewhere on my blog—I have no desire to reinvent the wheel when it comes to arguments for moral realism.
1 Moral Realism
0 An Introduction to Moral Realism
There are vast numbers of superficially clever arguments one can generate for crazy, skeptical conclusions; conclusions like that the external world doesn’t exist, we can’t know anything, memory isn’t reliable, and so on. These arguments, while interesting and no doubt useful if one ever comes across a real honest-to-god skeptic — a rather rare breed — don’t have much significance; skepticism exists as little more than a curiosity in the mind of the modern philosopher, something which takes real thought to refute, yet is not worth taking seriously as a set of views.
Yet there’s one form of extreme skepticism with actually existing trenchant advocates — real advocates who fill philosophy departments, rather than, like the external world or memory skeptic, merely being hypothetical advocates for the devil in philosophy papers. This skeptic is one who doubts that there are objective moral truths — moral facts made true not by the beliefs of any person.
Moral realism is the claim that there are true moral facts — ones that are not made true by anyone’s attitudes towards them. So if you think that the sentence that will follow this one is true and would be so even if no one else thought it was, you’re a moral realist. It’s typically wrong to torture infants for fun!
Now, no doubt to the moral anti-realist, my remarks sound harsh. How dare I compare them to the person who doubts anything can be really known.
“I refute the skeptic about the external world thus — ‘here’s one hand; here’s another hand,’ now show me the moral facts.”
— A hypothetical moral anti-realist
Well, in this article, I’ll explain why moral anti-realism is so implausible — while one always can accept the anti-realist conclusion, it’s always possible to bite the bullet on crazy conclusions. Yet moral anti-realism, much like anti-realism about the external world, is wildly implausible in what it says about the world.
We do not live in a bleak world, devoid of meaning and value. Our world is packed with value, positively buzzing with it, at least, if you know where to look, and don’t fall prey to crazy skepticism. Unfortunately, the flip side of that is that the world is also packed full of disvalue — horrific, agonizing, pointless, meaningless suffering, suffering that flips the otherwise positive value of the hedonic register — and that suffering must be eliminated as soon as possible. It is a moral emergency every second that it goes on.
In this article, I will defend moral realism. I will defend the claim that it is, in fact, wrong to torture infants for fun — even if everyone disagreed. It’s no surprise that moral realism is accepted by a majority of philosophers, though it’s certainly far from a universal view.
1 A Point About Methodology
Seeming is believing — as I hope to argue. Or, more specifically, if X seems to be the case to you, that in general gives you some reason to think X is, in fact, the case. I’ve already addressed this in a previous article, so I’ll quote that.
Absent relying on what seems to be the case after careful reflection, we could know nothing, as (Huemer, 2007) has argued persuasively. Several cases show that intuitions are indispensable for having any knowledge and doing any productive moral reasoning.
Any argument against intuitions is one that we’d only accept if it seems true after reflection, which once again relies on seemings. Thus, rejection of intuitions is self-defeating, because we wouldn’t accept it if its premises didn’t seem true.
Any time we consider any view which has some arguments both for and against it, we can only rely on our seemings to conclude which argument is stronger. For example, when deciding whether or not god exists, most would be willing to grant that there is some evidence on both sides. The probability that something exists is higher on theism than on atheism, for example, because theism entails that something exists; meanwhile, the probability of god being hidden is higher on atheism, because the probability of god revealing himself, conditional on atheism, is zero. Thus, there are arguments on both sides, so any time we evaluate whether theism is true, we must compare the strength of the evidence on both sides. This will require reliance on seemings. The same broad principle is true for any issue we evaluate, be it religious, philosophical, or political.
Consider a series of things we take to be true but can’t verify: that the laws of logic would hold in a parallel universe; that things can’t have a color without a shape; that the laws of physics could have been different; that implicit in any moral claim about x being bad is a counterfactual claim that, had x not occurred, things would be better; and that, assuming space is not curved, the shortest distance between any two points is a straight line. We can’t verify those claims directly, but we’re justified in believing them because they seem true—we can intuitively grasp that they are justified.
The basic axioms of reasoning also offer an illustrative example. We are justified in accepting induction, the reliability of the external world, the universality of the laws of logic, the axioms of mathematics, and the basic reliability of our memory, even if we haven’t worked out rigorous philosophical justifications for those things. This is because they seem true.
Our starting intuitions are not always perfect, and they can be overcome by other things that seem true.
Maybe you’re not a phenomenal conservative. Perhaps you think that in some cases, intuitions don’t serve as justification. However, we should all accept the following more modest principle.
Wise Phenomenal Conservatism: If P seems true upon careful reflection from competent observers, that gives us some prima facie reason to believe P.
This allows us to sidestep the main objections to phenomenal conservatism listed here.
Responding to the crazy appearances objection
Some critics have worried that phenomenal conservatism commits us to saying that all sorts of crazy propositions could be non-inferentially justified. Suppose that when I see a certain walnut tree, it just seems to me that the tree was planted on April 24, 1914 (this example is from Markie 2005, p. 357). This seeming comes completely out of the blue, unrelated to anything else about my experience – there is no date-of-planting sign on the tree, for example; I am just suffering from a brain malfunction. If PC is true, then as long as I have no reason to doubt my experience, I have some justification for believing that the tree was planted on that date.
More ominously, suppose that it just seems to me that a certain religion is true, and that I should kill anyone who does not subscribe to the one true religion. I have no evidence either for or against these propositions other than that they just seem true to me (this example is from Tooley 2013, section 5.1.2). If PC is true, then I would be justified (to some degree) in thinking that I should kill everyone who fails to subscribe to the “true” religion. And perhaps I would then be morally justified in actually trying to kill these “infidels” (as Littlejohn [2011] worries).
But in the case of a person to whom a certain religion seems true, this seeming is no doubt not the product of careful, prolonged rational reflection in which they consider all of the facts. If a very rational person considered all the facts and the religion still seemed true, it seems they would have prima facie justification for thinking the religion is true. This objection is also defused by Huemer’s responses to it.
Phenomenal conservatives are likely to bravely embrace the possibility of justified beliefs in “crazy” (to us) propositions, while adding a few comments to reduce the shock of doing so. To begin with, any actual person with anything like normal background knowledge and experience would in fact have defeaters for the beliefs mentioned in these examples (people can’t normally tell when a tree was planted by looking at it; there are many conflicting religions; religious beliefs tend to be determined by one’s upbringing; and so on).
We could try to imagine cases in which the subjects had no such background information. This, however, would render the scenarios even more strange than they already are. And this is a problem for two reasons. First, it is very difficult to vividly imagine these scenarios. Markie’s walnut tree scenario is particularly hard to imagine – what is it like to have an experience of a tree’s seeming to have been planted on April 24, 1914? Is it even possible for a human being to have such an experience? The difficulty of vividly imagining a scenario should undermine our confidence in any reported intuitions about that scenario.
The second problem is that our intuitions about strange scenarios may be influenced by what we reasonably believe about superficially similar but more realistic scenarios. We are particularly unlikely to have reliable intuitions about a scenario S when (i) we never encounter or think about S in normal life, (ii) S is superficially similar to another scenario, S’, which we encounter or think about quite a bit, and (iii) the correct judgment about S’ is different from the correct judgment about S. For instance, in the actual world, people who think they should kill infidels are highly irrational in general and extremely unjustified in that belief in particular. It is not hard to see how this would incline us to say that the characters in Tooley’s and Littlejohn’s examples are also irrational. That is, even if PC were true, it seems likely that a fair number of people would report the intuition that the hypothetical religious fanatics are unjustified.
A further observation relevant to the religious example is that the practical consequences of a belief may impact the degree of epistemic justification that one needs in order to be justified in acting on the belief, such that a belief with extremely serious practical consequences may call for a higher degree of justification and a stronger effort at investigation than would be the case for a belief with less serious consequences. PC only speaks of one’s having some justification for believing P; it does not entail that this is a sufficient degree of justification for taking action based on P.
There’s certainly much more to be said on this topic, only a minuscule portion of which I can discuss in this article. However, in philosophy, it’s pretty widely accepted that what seems to be the case probably is the case, all else equal, in at least most cases. One can accept epistemic particularism, for example, and still accept this modest requirement.
Responding to the alleged defeaters in the moral domain
Walter Sinnott-Armstrong argues that we need extra justification in certain sorts of cases. If a person believed a proposition purely as a result of self-interested motivated reasoning, their seeming wouldn’t be justified. Thus, he argues that for a belief to garner prima facie justification, it must satisfy the following constraint.
Principle 1: confirmation is needed for a believer to be justified when the believer is partial.
However, as Ballantyne and Thurow note, this isn’t a blanket defeater for our moral beliefs; rather, it is only a defeater for the subset of our moral beliefs that are likely to be caused in some way by partial considerations.
So now the question is whether the specific thought experiments I’ll appeal to in defending moral realism are plausibly caused by partiality. We’ll investigate this for each of them in turn.
However, one thing is worth noting. Utilitarianism has a plausible route to avoiding these objections. Utilitarianism is frequently chided for being too demanding and too impartial, so partiality can hardly be what drives utilitarian intuitions. This gives us a good reason to revise the intuitions behind utilitarianism’s rivals, though not those behind utilitarianism itself.
This principle is also too broad. Let’s imagine that all people had self-interested reasons to believe in core logical or mathematical facts. This wouldn’t mean we should reject modus ponens or the core mathematical axioms. Perhaps it would weaken the intuition somewhat, but it wouldn’t eliminate it entirely.
This is one worry I have with Armstrong’s approach. He seems much too willing to divide intuitions into two distinct classes: justified and unjustified. However, justification comes in degrees. Declaring an intuition flat out justified or flat out unjustified seems to be a mistake — just like declaring a food hot or cold would be unwise, if one were attempting to make precise judgments about the average temperature of a room.
Armstrong’s next constraint is the following.
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
Several points are worth making. First, the intuitions I’m appealing to are very widespread — not many people lack the intuitions to which I’ll appeal. Perhaps some people end up reflectively rejecting those intuitions, but people tend to have the intuitions. Thus, we need not revise these intuitions in light of those who disagree. I’ll defend this more later.
Second, given that most philosophers are moral realists, it seems that most relevant domain experts find the intuitions appealing. If they didn’t, they almost surely wouldn’t be moral realists.
Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.
All of our decisions are clouded by emotion to some degree. That does not mean that we should abandon all of our judgments. Again, rather than seeing things as a yes/no question of whether or not our intuitions are justified, it makes far more sense to see justification as coming in degrees. The more emotional we are, the less we should trust our intuitions. However, we shouldn’t throw out all of our intuitions based merely on our omnipresent emotions.
Principle 4: confirmation is needed for a believer to be justified when the circumstances are conducive to illusion.
With my traditional caveat that justification comes in degrees, this seems mostly correct.
Principle 5: confirmation is needed for a believer to be justified when the belief arises from an unreliable or disreputable source
My response here is the same as above.
2 Some Intuitions That Support Moral Realism
The most commonly cited objection to moral anti-realism in the literature is that it’s unintuitive. There is a vast wealth of scenarios in which anti-realism ends up being very counterintuitive. We’ll divide things up more specifically; each particular version of anti-realism has special cases in which it delivers exceptionally unintuitive results. Here are two cases:
This first case is the thing that convinced me of moral realism originally. Consider the world as it was at the time of the dinosaurs before anyone had any moral beliefs. Think about scenarios in which dinosaurs experienced immense agony, having their throats ripped out by other dinosaurs. It seems really, really obvious that that was bad.
The thing that’s bad about having one’s throat ripped out has nothing to do with the opinions of moral observers. Rather, it has to do with the actual badness of having one’s throat ripped out by a T-Rex. When we think about what’s bad about pain, anti-realists get the order of explanation wrong. We think that pain is bad because it is bad — it’s not bad merely because we think it is.
The second broad, general case is of the following variety. Take any action — torturing infants for fun is a good example because pretty much everyone agrees that it’s the type of thing you generally shouldn’t do. It really seems like the following sentence is true:
“It’s wrong to torture infants for fun, and it would be wrong to do so even if everyone thought it wasn’t wrong.”
Similarly, if there were a society that thought that they were religiously commanded to peck out the eyes of infants, they would be doing something really wrong. This would be so even if every single person in that society thought it wasn’t wrong.
Everyone could think it’s okay to torture animals in factory farms, and it would still be horrifically immoral.
This becomes especially clear when we consider moral questions that we’re not sure about. When we try to make a decision about whether abortion is wrong, or eating meat, we’re trying to discover, not invent, the answer. If the answer were just whatever we or someone else said it was — or if there were no answer — then it would make no sense to deliberate about whether or not it was wrong.
Whenever you argue about morality, it seems you are assuming that there is some right answer — and that answer isn’t made true by anyone’s attitude towards it.
Let’s see whether these results can be debunked as a result of biasing factors.
Principle 1: confirmation is needed for a believer to be justified when the believer is partial.
I’m not particularly partial about whether the dinosaur’s suffering was bad. It has little emotional impact on me and I am not a dinosaur. Additionally, I’m not very partial on the question of whether torturing infants would be wrong even if everyone thought it wasn’t wrong — this will never affect me, and the moral facts themselves are causally inert. Thus, this judgment can’t be debunked by partiality considerations.
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
Very few people disagree, at least based on initial intuitions, with the judgments I’ve laid out. I did a small poll of people on Twitter, asking the question of whether it would be wrong to torture infants for fun, and would be so even if no one thought it was. So far, 82.6% of people have been in agreement.
There are some people who disagree. However, some disagreement is almost inevitable. If disagreement made us abandon our beliefs, we’d have to abandon our political beliefs, because there’s far more disagreement about political claims than there is about the claim that it’s typically wrong to torture infants for fun.
Also, those who disagree tend to have views that I think are factually mistaken on independent grounds. Anti-realists seem more likely to adopt other claims that I find implausible. Additionally, they tend to make the error of not placing significant weight on moral intuitions. Thus, I think we have independent reasons to prefer the belief in realism.
It also seems like a lot of the anti-realists who don’t find the sentence “it’s typically wrong to torture infants for fun and would be so even if everyone disagreed” intuitive tend to be confused about what moral statements mean — about what it means to say that things are wrong. I, on the other hand, like most moral realists, and indeed many anti-realists, understand what the sentence means. Thus, I have direct acquaintance with the coherence of moral sentences — I directly understand what it means to say that things are bad or wrong.
If it turned out that a lot of the skeptics of quantum mechanics just turned out to not understand the theory, that would give us good reason to discount their views. This seems to be pretty much the situation in the moral domain.
Additionally, given that most philosophers are moral realists, we have good reason to find it the more intuitively plausible view. If the consensus of people who have carefully studied an issue tends to support moral realism, this gives us good reason to think that moral realism is true. The wisdom of the crowds tends to be greater than that of any individual.
Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.
I’m really not particularly emotional about the notion that dinosaur suffering was bad. Nor do I have a particularly strong emotional reaction to some types of wrong actions, say tax fraud. If there was a type of tax fraud that decreased aggregate utility, I’d think it was wrong, even if everyone thought it wasn’t. I have no emotional attachment to that belief.
Additionally, we have good evidence from the dual process literature that careful, prolonged reflection tends to be what causes utilitarian beliefs — it’s the unreliable emotional reactions that cause our non-utilitarian beliefs. Thus, at best, this would give a reason to revise our non-utilitarian beliefs. I’ll quote an article I wrote on the subject.
One 2012 study finds that asking people to think more makes them more utilitarian. When they have less time to think, they conversely become less utilitarian. If reasoning led to utilitarianism, this is exactly what we’d expect: more time to reason would make people proportionately more utilitarian.
A 2021 study, compiling the largest available dataset, concluded across 8 different studies that greater reasoning ability is correlated with being more utilitarian. The length of the dorsolateral prefrontal cortex correlates with general reasoning ability. Its length also correlates with being more utilitarian. Coincidence? I think not.
Yet another study finds that being under greater cognitive pressure makes people less utilitarian. This is exactly what we’d predict. Much like being under cognitive strain makes people less likely to solve math problems correctly, it also makes them less likely to solve moral questions correctly. ‘Correctly,’ here, meaning in the utilitarian way.
Yet the data doesn’t stop there. A 2014 study found a few interesting things. It looked at patients with damaged VMPCs—a brain region responsible for lots of emotional judgments. It concluded that they were far more utilitarian than the general population. This is exactly what we’d predict if utilitarianism were caused by good reasoning and careful reflection, and alternative theories were caused by emotions. Inducing positive emotions in people likewise makes them more utilitarian—which is what we’d expect if negative emotions were driving people to reject utilitarian results.
Additionally, there are lots of moral judgments that seem to be backed by no emotional reaction at all. For example, I accept the repugnant conclusion, though I have no emotional attachment to doing so.
Principle 4: confirmation is needed for a believer to be justified when the circumstances are conducive to illusion.
We have no reason to think that beliefs in the moral domain — particularly ones that reach reflective equilibrium — are particularly susceptible to illusion. This is especially true of the consequentialist ones.
Principle 5: confirmation is needed for a believer to be justified when the belief arises from an unreliable or disreputable source
This isn’t true of moral belief. The belief that dinosaur suffering was bad, even though that suffering occurred before any person had formed a moral thought, is formed through careful reflection on the nature of the suffering — it isn’t based on anything else.
What if the folk think differently
I’m supremely confident that if you asked the folk whether it would be typically wrong to torture infants for fun, even if no one thought it was, they’d tend to say yes. Additionally, it turns out that The Folk Probably do Think What you Think They Think.
Also, I trust the reflective judgment of myself and qualified philosophers significantly more than I trust the folk. Sorry folk!
Classifying anti-realists
Given that, as previously discussed, moral realism is the view that there are true moral statements, that are true independently of people’s beliefs about them, there are three ways to deny it.
Non-cognitivism — this says that moral statements are neither true nor false; they’re not in the business of being true or false. On this view, moral statements are not truth-apt. There are lots of sentences that are not truth-apt — “shut the door,” for example, is neither true nor false.
Error theory — this says that moral statements, much like statements about witches, try to state facts, but they are systematically false. For example, if a person says ‘witches can fly and cast spells’ they think they’re saying something true, but they falsely believe in a vast category of things that aren’t real, namely, witches. Thus, all positive statements about morality, much like all positive statements about witches according to error theory, turn out to be false.
Subjectivism — this says that moral statements hinge on people’s attitudes towards them. There are different versions of subjectivism — they’re all implausible.
It turns out that each of these views has especially implausible results, ones not shared by the other two.
Non-cognitivism
Non-cognitivists think that moral statements are not truth apt. A non-cognitivist might think that saying murder is wrong really means boo! murder, or don’t murder! I’ve already explained why I think non-cognitivism is super implausible, which I’ll quote here.

On non-cognitivism, the statement

“It’s wrong to torture infants for fun, most of the time,”

is neither true nor false. Worse, the argument

“If it’s wrong to torture infants, then I shouldn’t torture infants

It’s wrong to torture infants

Therefore, I shouldn’t torture infants”

is incoherent. It’s like saying: if shut the door, then open the window; shut the door; therefore, open the window.
Additionally, as Huemer says on pages 20-21, describing the reasons to think moral statements are propositional
(a) Evaluative statements take the form of declarative sentences, rather than, say, imperatives, questions, or interjections. ‘Pleasure is good’ has the same grammatical form as ‘Weasels are mammals’. Sentences of this form are normally used to make factual assertions. In contrast, the paradigms of non-cognitive utterances, such as ‘Hurray for x’ and ‘Pursue x’, are not declarative sentences.
(b) Moral predicates can be transformed into abstract nouns, suggesting that they are intended to refer to properties; we talk about ‘goodness’, ‘rightness’, and so on, as in ‘I am not questioning the act’s prudence, but its rightness’.
(c) We ascribe to evaluations the same sort of properties as other propositions. You can say, ‘It is true that I have done some wrong things in the past’, ‘It is false that contraception is murder’, and ‘It is possible that abortion is wrong’. ‘True’, ‘false’, and ‘possible’ are predicates that we apply only to propositions. No one would say, ‘It is true that ouch’, ‘It is false that shut the door’, or ‘It is possible that hurray’.
(d) All the propositional attitude verbs can be prefixed to evaluative statements. We can say, ‘Jon believes that the war was just’, ‘I hope I did the right thing’, ‘I wish we had a better President’, and ‘I wonder whether I did the right thing’. In contrast, no one would say, ‘Jon believes that ouch’, ‘I hope that hurray for the Broncos’, ‘I wish that shut the door’, or ‘I wonder whether please pass the salt’. The obvious explanation is that such mental states as believing, hoping, wishing, and wondering are by their nature propositional: To hope is to hope that something is the case, to wonder is to wonder whether something is the case, and so on. That is why one cannot hope that one did the right thing unless there is a proposition (something that might be the case) corresponding to the expression ‘one did the right thing’.
(e) Evaluative statements can be transformed into yes/no questions: One can assert ‘Cinnamon ice cream is good’, but one can also ask, ‘Is cinnamon ice cream good?’ No analogous questions can be formed from imperatives or emotional expressions: ‘Shut the door?’ and ‘Hurray for the Broncos?’ lack clear meaning. The obvious explanation is that a yes/no question requires a proposition; it asks whether something is the case. A prescriptivist non-cognitivist might interpret some evaluative yes/no questions as requests for instruction, as in ‘Should I shut off the oven now?’ But other questions would defy interpretation along these lines, including evaluative questions about other people’s behavior or about the past: ‘Was it wrong for Emperor Nero to kill Agrippina?’ is not a request for instruction.
(f) One can issue imperatives and emotional expressions directed at things that are characterized morally. If non-cognitivism is true, what do these mean: ‘Do the right thing.’ ‘Hurray for virtue!’ Even more puzzlingly for the non-cognitivist, you can imagine appropriate contexts for such remarks as, ‘We shouldn’t be doing this, but I don’t care; let’s do it anyway’. This is perfectly intelligible, but it would be unintelligible if ‘We shouldn’t be doing this’ either expressed an aversive emotion towards the proposed action or issued an imperative not to do it.
(g) In some sentences, evaluative terms appear without the speaker’s either endorsing or impugning anything, yet the terms are used in their normal senses. This is known as the Frege-Geach problem and forms the basis for perhaps the best-known objection to noncognitivism.
Error Theory
Error theory says that all positive moral statements are false. Error theory is itself best described as in error, because of how sharply it diverges from the truth. It runs into a problem — there are obviously some true moral statements. Consider the following six examples.
What the icebox killers did was wrong.
The Holocaust was immoral.
Torturing infants for fun is typically wrong.
Burning people at the stake is wrong.
It is immoral to cause innocent people to experience infinite torture.
Pleasure is better than pain.
The error theorist has to say that the meaning of those statements is exactly what the realist thinks it is. The error theorist thus has to think that when people say the Holocaust was bad, they’re making a mistake. However, this is terribly implausible. It really, really doesn’t seem like the claim ‘the Holocaust was bad’ is mistaken.
Any argument for error theory will be way less intuitive than the notion that the Holocaust was, in fact, bad.
Let’s test these intuitions.
Principle 1: confirmation is needed for a believer to be justified when the believer is partial.
I’m not really that partial about many things I take to be bad. I think malaria is bad, despite not being personally affected by malaria. Similarly, I am in no way harmed by most of history’s evils — including hypothetical evils that have never been experienced, but that I recognize would be bad if experienced.
On top of this, partiality may be a reason to rethink an intuition somewhat, but it’s certainly not a reason to simply throw out any intuition that might be influenced by it.
Principle 2: confirmation is needed for a believer to be justified when people disagree with no independent reason to prefer one belief or believer to the other.
Very few people disagree that the notion that it’s wrong to cause infinite torture is intuitive.
Principle 3: confirmation is needed for a believer to be justified when the believer is emotional in a way that clouds judgment.
Being emotional does reduce the probative force of intuitions. However, it does not suffice to debunk an intuition — we cannot merely disregard intuitions because there’s some emotional impact. But also, I’m not particularly emotional when I consider suffering in the abstract. It still seems clearly bad.
The responses to four and five from above still apply.
Subjectivism
Subjectivism holds that moral facts depend on some people’s beliefs or desires. This could be the desires of a culture — if so, it’s called cultural relativism.
Cultural Relativism: Crazy, Illogical, and Accepted by no One Except Philosophically Illiterate Gender Studies Majors
Cultural relativism is, as the sub-header suggests, something that I find rather implausible. No serious philosopher that I know of defends it. One is a cultural relativist if one thinks that something is right just in case one’s society holds that it is right.
Problem: it’s obviously false. Consider a few examples.
Imagine the Nazis had convinced everyone that the Holocaust was good. That clearly would not have made it good.
Imagine a society in universal agreement that all babies should be tortured to death in a maximally horrible and brutal way. That agreement wouldn’t make the practice good.
People often accept cultural relativism because they’re vaguely confused and want to be tolerant. But if cultural relativism is true, then tolerance is only good when the broader culture supports it. On cultural relativism, disagreeing with the norms of one’s broader culture is incoherent: saying ‘my culture is acting wrongly’ is just a contradiction in terms. Yet that’s clearly absurd.
This also means that if two different cultures argue about which norm is correct, they’re arguing about nothing. If norms are relative to a culture then there’s no fact of the matter about which culture is correct. But that’s absurd; the Nazis were worse than non-Nazis.
To quote my previous article on the subject:
If it’s determined by society, the following statements are false:
“My society is immoral when it tortures infants for fun.”
“Nazi Germany acted immorally.”
“Some societal practices are immoral.”
“When society chops off the fingers and toes of small children based on their skin color, that’s immoral.”
“It’s immoral for society to boil children in pots.”
Individual Subjectivism
Individual subjectivism says that morality is determined by the attitudes of the speaker. The statement ‘murder is wrong’ means ‘I disapprove of murder.’ There are, of course, more subtle versions, but this is the basic idea.
I’ve already given objections in my previous article on the subject.
If it’s determined by the moral system of the speaker, the following claims are true:
“When the Nazi whose ethical system held that the primary ethical obligation was killing Jews said ‘it is moral to kill Jews,’ they were right.”
“When slave owners said ‘the interests of slaves don’t matter,’ they were right.”
“When Caligula said ‘it is good to torture people,’ and did so, he was right.”
“The person who thinks that it’s good to maximize suffering is right when he says ‘it’s moral to set little kids on fire.’”
Additionally, when I say “we should be utilitarians,” and Kant says “we shouldn’t be utilitarians,” we’re not actually disagreeing.
Conclusion of this section
So, I think the conclusions of moral anti-realism are absurd. Anti-realism holds that wrongness either isn’t real or depends on our desires in some way. But that’s just wrong! It is well and truly wrong to torture infants to death, and it would be so even if no one agreed.
3 Irrational Desires
The fool says in his heart ‘I have future Tuesday indifference.’
The argument I intend to lay out is relatively simple in its essence, relatively drab, and yet quite forceful.
1 If moral realism is not true, then we don’t have irrational desires
2 We do have irrational desires
Therefore, moral realism is true
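For what it’s worth, the argument’s propositional form is a classically valid modus tollens. Here is a minimal Lean sketch; the letters are labels of my own, not part of the original argument.

```lean
-- Modus tollens sketch; classically valid.
--   R : moral realism is true
--   D : we have irrational desires
-- Premise 1 is h1 : ¬R → ¬D; premise 2 is h2 : D; the conclusion is R.
open Classical in
example (R D : Prop) (h1 : ¬R → ¬D) (h2 : D) : R :=
  byContradiction (fun hR => h1 hR h2)
```

So the logic is not in dispute; everything turns on the premises.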
Defending premise 1
Premise one seems the most controversial to laypersons, but it is premise two that philosophical anti-realists dispute. Morality is about what we have reason to do, impartial reason to be specific. These reasons are not dependent on our desires.
Morality thus describes what reasons we have to do things, unmoored from our desires. When one claims it’s wrong to murder, one means that, even were a person to desire to murder another, they shouldn’t do it; they have a reason not to, independent of their desires.
Thus, the argument for premise one is as follows.
1 If there are desire independent reasons, there are impartial desire independent reasons
2 If there are impartial desire independent reasons, morality is objective
Therefore, if there are desire independent reasons, morality is objective
Premise 2 is true by definition. Premise 1 is trivial: impartial desire independent reasons are just desire independent reasons with a requirement of impartiality added. That impartiality can be achieved by, for example, making decisions from behind the veil of ignorance, or some similar device.
Thus, if you actually have reasons to have particular desires, to aim at particular things, then morality is objective. Let’s now investigate that assumption.
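Read with the conclusion conditional on there being desire independent reasons, as the previous sentence suggests, the sub-argument is a simple hypothetical syllogism. A minimal Lean sketch, with letters as my own labels:

```lean
-- Hypothetical syllogism sketch.
--   D : there are desire independent reasons
--   I : there are impartial desire independent reasons
--   O : morality is objective
example (D I O : Prop) (h1 : D → I) (h2 : I → O) : D → O :=
  fun hD => h2 (h1 hD)
```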
Defending Premise 2
Premise 2 states that there are, in fact, irrational desires. This premise is obvious enough.
Note that I use desire here in a broad sense. By desire I do not mean what one merely enjoys; that obviously can’t be irrational. My preference for chocolate ice cream over vanilla ice cream clearly cannot be in error. Rather, I use desire to indicate one’s ultimate aims, in light of the things one enjoys. I’ll use desire, broad aims, goals, and ultimate goals interchangeably.
Thus, the question is not whether one who prefers chocolate to vanilla is a fool. Instead, it’s whether someone who prefers chocolate to vanilla but chooses vanilla for no reason is acting foolishly.
The anti-realist is in the difficult position of denying one of the most evident facts of the human condition — that we can be fools not merely in how we get what we want but in what we want in the first place.
Consider the following cases.
1 Future Tuesday Indifference2: A person doesn’t care what happens to them on a future Tuesday. When Tuesday rolls around, they care a great deal about what happens to them; they’re just indifferent to happenings on a future Tuesday. This person is given the following gamble — they can either get a pinprick on Monday or endure the fires of hell on Tuesday. If they endure the fires of hell on Tuesday, this will not merely affect what happens this Tuesday — every Tuesday until the sun burns out shall be accompanied by unfathomable misery — the likes of which can’t be imagined, next to which the collective misery of history’s worst atrocities is but a paltry, vanishing scintilla.
They know that when Tuesday rolls around, they will shriek until their vocal cords are destroyed, for the agony is unendurable (their vocal cords will be healed before Wednesday, so they shall only suffer on Tuesdays). They shall cry out for death, yet none shall be afforded to them.
Yet they already know this. However, they simply do not care what happens to them on Tuesday. They do not dissociate from their Tuesday self — they think they’re the same person as their Tuesday self. However, they just don’t care what happens to themself on Tuesday.
Now you might be tempted to imagine that they don’t actually mind what happens on Tuesday; after all, they’re indifferent to what happens on Tuesday. But this misreads the case: they are only indifferent to what happens on future Tuesdays. When Tuesday rolls around, they will fiercely regret their decision. Yet after Tuesday is done, they will be glad that they made it; after all, they don’t care what happens on a future Tuesday. We can even stipulate that when it’s Tuesday, they’re hypnotized to believe it’s a Monday, so their suffering feels from the inside exactly as it would were it experienced on a Monday.
This person with indifference to future Tuesdays is clearly making an error, and not a minor one. In fact, it is certainly the gravest error in human history, one which inflicts more misery than any other. Yet the anti-realist must insist that, far from being the greatest error in human history, it isn’t an error at all.
After all, the person is making no factual error; they are perfectly aware that they will suffer on future Tuesdays. On the anti-realist account, where lies their error? They know they will suffer, yet they do not care, for the suffering will be on a Tuesday.
Only the moral realist can account for their error — for their irrationality and great foolishness in aiming at unfathomable misery on Tuesday, rather than a pinprick on Monday. On the realist account — or at least the sensible realist account, no doubt some crazy natural law theorists would deny this — we all have reason to avoid future agony. This explains why it would be an error to subject oneself to infinite torture on a Tuesday. The fact that it’s a Tuesday gives one no reason to discount their suffering.
Now the anti-realist could try to avoid this by claiming that a decision is irrational if one will regret it. However, this runs into three problems.
First, if anti-realism is true, then we have no desire independent reason to do things; whether we’ll regret them doesn’t matter. Thus, regrettably, this criterion fails. Second, by this standard, both the pinprick on a single Monday and the hellish torture on Tuesdays would be irrational, because the person will regret each of them at various points: on every day of the week except Tuesday, they’d regret having chosen to endure the Monday pinprick. Third, even if by sheer stubbornness they never wavered in their verdict, that would in no way change whether they chose rightly.
2 Picking Grass: Suppose a person hates picking grass: they derive no enjoyment from it and it causes them a good deal of suffering. There is no upside to picking grass; they don’t find it meaningful or conducive to virtue. This person simply has a desire to pick grass. Suppose on top of this that they are terribly allergic to grass, so picking it causes them to develop painful ulcers that itch and hurt. Despite all this, and despite never enjoying it, they spend hours a day picking grass.
Is the miserable grass picker really making no error? Could there be a conclusion more obvious than that the person who picks grass all day is acting the fool — that their life is really worse than one whose life is brimming with meaning, happiness, and love?
3 Left Side Indifference: A person is indifferent to suffering on the left side of their body. They still feel suffering on the left side just as vividly and intensely as they would on the right side. Indeed, we can even imagine that they feel it a hundred times more vividly and intensely; it wouldn’t matter. They simply do not care about left side suffering.
It induces them to cry out in pain; it is agony, after all. But much like agony endured for a greater purpose, the agony one endures on a run, say, they do not think it is actually bad. Thus, this person has a blazing iron burn the left side of their body from head to toe, inflicting profound agony. They cry out in pain as it happens. On the anti-realist account, they’re acting totally rationally. Yet that’s clearly crazy!
4 Four-Year-Old Children: Suppose that, and this is not an implausible assumption, there’s a four-year-old child who doesn’t want to go into a doctor’s office. After all, they really don’t like shots. This child is informed of the relevant facts: if they don’t go into the doctor’s office, they will die a horribly painful death from cancer. You clearly explain this to them so that they’re aware of all the relevant facts. However, the four-year-old still digs in their heels (I hear they tend to do that) and refuses categorically to go in.
It’s incredibly obvious that the four-year-old is being irrational. Yet they’ve been informed of the relevant facts and are acting in accordance with their desires. So on anti-realism, they’re being totally rational.
5 Cutting: Consider a person who is depressed and cuts themself. When they do it, they desire to cut themself. It’s not implausible that being informed of all the relevant facts wouldn’t make that desire go away. In this case, it still seems they’re being irrational.
6 Consistent Anorexia: A person desires to be thin even if it brings about their starvation. This brings them no joy. They starve themself to death. It really seems that they’re being irrational.
7 A person has consensual homosexual sex. They then join a religious cult. This cult makes no factual mistakes; its members don’t even believe in a god. However, they hold, just as a basic moral principle, that homosexual sex is horrifically immoral and that those who have it deserve to suffer. On the anti-realist account, not only is the person not mistaken, they would be fully rational to endure infinite suffering because they think they deserve it.
8 A person wants to commit suicide and knows all the relevant facts. Their future would be very positive in terms of expected well-being. On anti-realism, it would nonetheless be rational for them to commit suicide.
9 A person is currently enduring more suffering than anyone ever has in all of human history. However, while this person doesn’t enjoy suffering (they experience it the same way the rest of us do), they have a higher-order indifference to it. While they hate the experience and cry out in agony, they don’t actually want their agony to end; they don’t care at the higher level. On the anti-realist account, they have no reason to end their agony. But that’s clearly implausible.
10 A person doesn’t care about suffering if it comes from their pancreas. Thus, they’re in horrific misery, but because it comes from their pancreas they do nothing to prevent it, instead preventing a minuscule amount of non-pancreas agony. On anti-realism, they’ve made no error. But that’s crazy!
4 The Discovery Argument
One of the arguments made for mathematical platonism is the argument from mathematical discovery. The basic claim is as follows: we cannot make discoveries in purely fictional domains. If mathematics were invented rather than discovered, how in the world would we make mathematical discoveries? How would we learn new things about mathematics, things we didn’t already know?
Well, the same broad principle holds in normative ethics. If morality really were something that we made up rather than discovered, it would be very unlikely that we’d be able to reach reflective equilibrium with our moral beliefs, wrapping them up into some neat little web.
But as I’ve argued at great length, we can reach reflective equilibrium with our moral beliefs; they do converge. We can make significant moral discoveries. The repugnant conclusion is a prime example of a significant moral discovery that we have made.
Thus, there are two facts about moral discovery that favor moral realism.
First, the fact that we can make significant numbers of non-trivial moral discoveries in the first place favors it — for it’s much more strongly predicted on the realist hypothesis than the anti-realist hypothesis.
Second, the fact that there’s a clear pattern to the moral convergence favors it too. Again, this is a hugely controversial thesis; if you don’t think the arguments I’ve made in my 36-part series are at least mostly right, you won’t find this persuasive. However, if it turns out that every time we carefully reflect on a case it ends up being consistent with some simple pattern of decision-making, that really favors moral realism.
Consider every other domain in which the following features are true.
1 There is divergence prior to careful reflection.
2 There are persuasive arguments that would lead to convergence after adequate ideal reflection.
3 Many people think it’s a realist domain.
Every other domain with those features turns out to be realist. This provides a potent inductive case that the same is true of the moral domain.
5 The argument from phenomenal introspection
Credit to Neil Sinhababu for this argument.
If we have an accurate way of gaining knowledge and this method informs us of moral realism, then this gives us a good reason to be moral realists, in much the same way that, if a magic 8-ball were always right and it informed us of some fact, that would give us good reason to believe that fact.
Neil Sinhababu argues that we have a reliable way to gain access to a moral truth: phenomenal introspection. Phenomenal introspection involves reflecting on a mental state and forming beliefs about what it’s like. Here are several examples of beliefs formed through phenomenal introspection.
My experience of the lemon is brighter than my experience of the endless void that I saw recently.
My experience of the car is louder than my experience of the crickets.
My experience of having my hand set on fire was painful.
We have solid evolutionary reason to expect phenomenal introspection to be reliable — after all, beings who are able to form reliable beliefs about their mental states are much more likely to survive and reproduce than ones that are not. We generally trust phenomenal introspection and have significant evidence for its reliability.
Thus, if we arrive at a belief through phenomenal introspection, we should trust it. Well, it turns out that through phenomenal introspection, we arrive at the belief that pleasure is good. When we reflect on what it’s like to, for example, eat tasty food, we conclude that it’s good. Thus, we are reliably informed of a moral fact.
Lance Bush has written a response to an article I wrote about this argument; I’ll address his response here.
I summarize Sinhababu’s argument as follows.
Premise 1: Phenomenal introspection is the only reliable way of forming moral beliefs.
Premise 2: Phenomenal introspection informs us of only hedonism
Conclusion: Hedonism is true…and pleasure is the only good.
However, we can ignore premise one, because it serves as a reason other methods are unreliable — not as a reason phenomenal introspection is reliable. Lance says
I have a lot of concerns with (1), given that I don’t know what is meant by a “moral belief”
I take a moral belief to be a belief about what is right and wrong, or what one should or shouldn’t do, or about what is good and bad. Morality is fundamentally about what we have impartial reason to do, independent of our desires. For more on this definition, I’d recommend reading Parfit’s On What Matters.
I’d also note that it’s strange to frame P1 as a claim about a reliable way to form moral beliefs, since “reliable” doesn’t seem connected to whether the beliefs in question are true or not. After all, one can have a system that “reliably” (in some sense) produces false beliefs. This premise might be rephrased as something like “Phenomenal introspection is the only way to reliably form true moral beliefs” or something like that. I’m not sure; perhaps Bentham’s bulldog could update or refine the premises in a future post or in a response to this post.
By reliable, I meant reliably true.
However, my initial reaction is to reject (2) because it seems like Sinhababu overestimates what kinds of information is available via introspection on one’s phenomenology, at least not without bringing in substantial background assumptions that aren’t themselves part of the experience or that might have a causal influence on the nature of the experience. It’s possible, for instance, that a commitment to or sympathy towards moral realism can influence one’s experiences in such a way that those experiences seem to confirm or support one’s realist views, when in fact it’s one’s realist views causing the experience. Since people lack adequate introspective access to their unconscious psychological processes, introspection may be an extraordinarily unreliable tool for doing philosophy.
Lance here criticizes some types of introspection — however, none of this is phenomenal introspection. People are good at forming reliable beliefs about their experiences, less good at forming reliable beliefs about, for example, their emotions. Not all introspection is alike.
Philosophers may think that they can appeal to theoretically neutral “seemings” to build philosophical theories, but not appreciate that the causal linkages cut both ways, and that their philosophical inclinations, built up over years of studying academic philosophy, can influence how they interpret their experiences, and do so in a way that isn’t introspectively accessible. If this does occur (and I suspect it not only does, but is ubiquitous), philosophers who appeal to how things seem to support their philosophical views are, effectively, appealing to their commitment to their philosophical positions as evidence in support of their commitment to their philosophical positions. Without a better understanding of the psychological processes at play in philosophical account-building, philosophers strike me as being in an epistemically questionable situation when they so confidently appeal to their philosophical intuitions and seemings.
I think this objection to phenomenal conservatism is wrong. One can reject a seeming. For example, to me, the conclusion I describe here seems wrong, however, I end up accepting it upon reflection, because the balance of seemings supports it.
But we can table this discussion because Sinhababu doesn’t rely on seemings — he relies on phenomenal introspection.
Phenomenology involves access to what your experiences are like, but it is not constituted by any substantive philosophical inferences about those experiences. That is, if I have, say, an experience of something seeming red, it isn’t (and I think it couldn’t) be a feature of that experience that the redness of the red is, e.g., of such a kind so as to be directly (perhaps “non-inferentially”) inconsistent with a particular model of perception or consciousness. For instance, I don’t think substance dualism could be something one has phenomenal access to, but rather it would be an inference, or position one takes, that explains one’s experiences or may be inferred from one’s experiences.
No disagreement so far.
When I have good or enjoyable experiences, my phenomenology involves what I’d call positive affective states. I don’t think anything about these states includes, as a feature of the experience itself, that the experience itself involves stance-independence or stance-independence about the goodness of the experience. That doesn’t seem like the sort of thing that could be a feature of one’s phenomenology. The notion that phenomenal introspection informs us of hedonism thus strikes me almost as a kind of category error. Substantive metaphysical theses don’t seem like the sorts of things one can experience. And thus the notion that hedonism is true in a stance-independent way just isn’t the kind of thing that I think one could experience, since it’s a metaphysical thesis, not e.g., a phenomenal property (though as an aside I don’t even think there are phenomenal properties, but that’s a separate issue).
I agree that generally introspecting on experiences doesn’t inform us of their mind-independent goodness. But if we introspect on experiences that we don’t want but are pleasurable, they still feel good, showing that their goodness doesn’t depend on our desires.
Second, nothing about the phenomenology of my positive affective states is distinctively moral. If I eat my favorite food or listen to music I like, I enjoy these experiences, but they aren’t moral experiences. As such, I see no reason to think that my good and bad experiences reflect any kind of distinctively moral reality. It’s not a feature of my positive experiences that they are morally good. I don’t even know what that means, and I am confident no compelling account from any philosopher will be forthcoming.
But when you reflect on pleasure it feels good in a way that seems to give one a reason to promote it — to produce more of it. This is a distinctly moral notion. Sinhababu has a longer section on this in his paper — his account is somewhat different from mine.
Even if pleasure were “good,” and I do think positive experiences are good (in an antirealist sense), nothing about these experiences strikes me as morally good. I don’t think there is any principled distinction between moral and nonmoral norms. I think the very notion of morality is a culturally constructed pseudocategory, not a legitimate category in which normative and evaluative concepts could subsist independent of the idiosyncratic tendency for certain linguistic communities to refer to them as “moral.” So it’s not clear to me how my positive experiences relate in any meaningful way to the culturally constructed notion of moral good that persists in contemporary analytic philosophy.
Pleasure feels good in the sense that it’s desirable, worth aiming at, worth promoting. If this argument successfully establishes that pleasure is worth promoting, then it has done all that it needs to do. I don’t think morality is anything over and above a description of the things that are well and truly worth promoting.
I don’t think any of my experiences involve any distinctively moral phenomenology, and such experiences are better explained in nonmoral terms. I’d note, however, that the notion that “hedonism is true” doesn’t make clear that hedonism is the true moral theory which isn’t explicitly stated here. I don’t know if Sinhababu (or BB, or anyone else) claims to have distinctively moral phenomenology, but I don’t think that I do, and I’m skeptical that anyone else does.
This question is ambiguous, but I think the answer would be no.
In any case, if this remark: “Therefore, hedonism is true — pleasure is the only good,” … is meant to convey the notion that hedonism is true in a way indicative of moral realism, I still I am very confident that it doesn’t mean anything; that is, I think this is literally unintelligible. I find my experiences to be good, in that I consider them good, but I don’t think this in any way indicates that they are good independent of me considering them as such, nor do I think this even makes any sense.
I’d have a few things to say here.
1 It seems that most people have an intuitive sense of what it means to say something is wrong. This normal usage acquaintance is going to be more helpful than some formulaic definition that appears in a dictionary.
2 This seems rather like denying that there’s knowledge on the grounds that we don’t have a good definition of it. Things are very difficult to define — but that doesn’t mean we can’t be confident in our concepts of them. Nothing is ever satisfactorily defined.
3 I take morality to be about what we have impartial reason to aim at. In other words, what we’d aim at if we were fully rational and impartial.
Bush quotes me saying the following.
“Phenomenal introspection involves reflecting on experiences and forming beliefs about what they’re like (e.g. I conclude that my yellow wall is bright and that itching is uncomfortable).”
He responds.
But the latter isn’t part of phenomenal introspection. Only the former is. Phenomenal introspection involves reflecting on your experiences such that you have the appearance of a bright yellow wall and the sense of an itch; the beliefs you form about these experiences aren’t part of the phenomenal introspection; they’re just standard philosophical reflection, or theory-building, that seeks to account for those experiences. And while we’re all welcome to engage in such theorizing, it’s a mistake to say that those beliefs are part of phenomenal introspection itself, or that you form beliefs about what those experiences are like; what you describe instead seem like inferences about what’s true given those experiences. And such inferences aren’t part of the phenomenology.
The beliefs about what they’re like are beliefs about the experience. So, for example, the belief that hunger is uncomfortable is reliably formed through phenomenal introspection.
There are other difficulties with BB’s framing here:
Premise 2 is true — when we reflect on pleasure we conclude that it’s good and that pain is bad.
This is ambiguous. What does BB mean by ‘good’ and ‘bad’? Since I understand these in antirealist terms, if Premise 2 is taken to imply that they’re true in a realist sense, then I simply deny the premise. I find it odd and disappointing that BB would echo the common tendency for philosophers to engage in such ambiguous claims. BB knows as well as I do that one of the central disputes in metaethics is between realism and antirealism. So why would BB present a premise that only includes, on the surface, normative claims, without making the metaethical presuppositions in the claim explicit?
This was responded to above — when we reflect on pain we conclude that it’s the type of thing that’s worth avoiding, that there should be less of. We conclude this even in cases when we want pain. To give an example, I recall when I was very young wanting to be cold for some reason. I found that it still felt unpleasant, despite my desire to brave the cold.
This particular ambiguity is especially common in metaethics, and its proliferation has a clear and perfidious rhetorical value: moral realists often present normative claims, e.g., “x is good” or “it’s wrong to torture babies for fun,” without making their metaethical presuppositions explicit, e.g., “x is stance-independently good” or “it’s objectively wrong to torture babies for fun.” Yet these normative claims serve as the premises to arguments that presuppose realism, or that are intended as arguments for realism, or are intended to prompt intuitions against antirealism and in favor of realism. All of these uses are illegitimate, because they rely on the inappropriate pragmatic implicature that to reject the premise or the claim isn’t merely to reject its metaethical component (which has been concealed), but the normative claim itself.
Earlier in this article I was more precise and clarified the things that the anti-realist is committed to.
The other problem with this remark is the claim that when “we” reflect on pleasure we conclude that it’s good and that pain is bad. Who’s “we”? Not me, certainly. I don’t reach the same conclusions as BB does via introspection. BB echoes yet another bad habit of contemporary analytic philosophers: making empirical claims about how other people think without doing the requisite empirical work. BB does not have any direct access to what other people’s phenomenology is like, so there’s little justification in making claims about what things are like for other people in the absence of evidence. And there’s little empirical evidence most people claim to have phenomenology that lends itself to moral realism.
I think Lance does — he’s just terminologically confused. When he reflects on his pain, he concludes it’s worth avoiding — that’s why he avoids it! I think if he reflected on being in pain even in cases when he wanted to be in pain, he’d similarly conclude that it was undesirable.
6 Responding to Objections
A Disagreement
One common objection to moral realism is the argument from disagreement. The basic version is as follows.
Premise 1: If a domain features disagreement, then it contains only subjective truths
Premise 2: The moral domain features disagreement
Therefore, the moral domain contains only subjective truths
Problem: Premise 1 is obviously false. Physics, mathematics, and numerous other domains garner lots of disagreement, yet they are objective.
There are lots of more robust arguments from disagreement — however, I think the best paper on this subject, by Enoch, decisively refutes them.
B Access
Some worry about how we have access to the moral facts. Enoch puts these worries to rest decisively.
I think we can rather safely postpone discussion of these worries to the following subsections, without saying much more on epistemic access. This is not just because one way of understanding talk of epistemic access is as an unofficial introduction to one of the other ways of stating the challenge, or because as they stand, worries about epistemic access are too metaphorical to be theoretically helpful (it isn’t clear, after all, what ‘‘access’’ exactly means here). The more important reason why we can safely avoid further discussion of the worry put in terms of epistemic access is the following. In the following subsections, I discuss versions of the epistemological worry put in terms of justification, reliability, and knowledge. It is possible, of course, that my arguments there fail. But if they do not, what remaining epistemological worry could talk of epistemic access introduce? If in the next subsections I manage to convince you that there are no special problems with the justification of normative beliefs, with the reliability of normative beliefs, or with normative knowledge, it seems to me you should be epistemologically satisfied. I do not see how talk of epistemic access should make you worried again.
Enoch similarly describes why epistemic challenges for moral realism shouldn’t be thought of in terms of justification, reliability, or knowledge. I’d recommend the full paper for an explanation of this.
C Correlation
Enoch thinks the most puzzling version of the epistemological objection doesn’t focus on any of the things above — instead, it focuses on a puzzling correlation between the correct moral views and the moral beliefs we happen to hold. Enoch says:
Suppose that Josh has many beliefs about a distant village in Nepal. And suppose that very often his beliefs about the village are true. Indeed, a very high proportion of his beliefs about this village are true, and he believes many of the truths about this village. In other words, there is a striking correlation between Josh’s beliefs about that village and the truths about that village. Such a striking correlation calls for explanation. And in such a case there is no mystery about how such an explanation would go—we would probably look for a causal route from the Nepalese village to Josh (he was there, saw all there is to see and remembers all there is to remember, he read texts that were written by people who were there, etc.). The reason we are so confident that there is such an explanation is precisely that the striking correlation is so striking—absent some such explanation, the correlation would be just too miraculous to believe. Utilizing such an example, Field (1989, pp. 25–30) suggests the following problem for mathematical Platonism: Mathematicians are remarkably good when it comes to their mathematical beliefs. Almost always, when mathematicians believe a mathematical proposition p, it is indeed true that p, and when they disbelieve p (or at least when they believe not-p) it is indeed false that p. There is, in other words, a striking correlation between mathematicians’ mathematical beliefs (at least up to a certain level of complexity) and the mathematical truths. Such a striking correlation calls for explanation. But it doesn’t seem that mathematical Platonists are in a position to offer any such explanation. 
The mathematical objects they believe in are abstract, and so causally inert, and so they cannot be causally responsible for mathematicians’ beliefs; the mathematical truths Platonists believe in are supposed to be independent of mathematicians and their beliefs, and so mathematicians’ beliefs aren’t causally (or constitutively) responsible for the mathematical truths. Nor does there seem to be some third factor that is causally responsible for both. What we have here, then, is a striking correlation between two factors that Platonists cannot explain in any of the standard ways of explaining such a correlation—by invoking a causal (or constitutive) connection from the first factor to the second, or from the second to the first, or from some third factor to both. But without such an explanation, the striking correlation may just be too implausible to believe, and, Field concludes, so is mathematical Platonism. Notice how elegant this way of stating the challenge is: There is no hidden assumption about the nature of knowledge, or of epistemic justification, or anything of the sort. There is just a striking correlation, the need to explain it, and the apparent unavailability of any explanation to the challenged view in the philosophy of mathematics.
On this, several points are worth making.
1 As Enoch points out, this is an explanatory game, so it makes sense to compare the explanatory adequacy of the theories holistically, and see if the best ones favor realism.
2 Also pointed out by Enoch, many people are in error, so the correlation isn’t that striking — it’s not as though there’s perfect correlation.
3 Our reasoning can weed out lots of views that are inconsistent — so that narrows the pool even more.
I’d also note
4 The correlation is not that striking — the correct moral view, which seems to be hedonistic act utilitarianism, is often wildly unintuitive.
5 Most of our beliefs tend to be right. Thus, based purely on priors, we’d expect the same broad pattern to be true when it comes to our moral beliefs.
6 The same broad argument can be made against epistemic realism — there’s a parallel correlation in that case too — but this doesn’t debunk our epistemic beliefs.
D Evolutionary Debunking
Street famously argued that our moral beliefs are evolutionarily debunkable — we formed them for evolutionary reasons, independent of their truth, so we shouldn’t believe them.
First, as Sinhababu points out, we’d expect evolution to make us reliable judges of our conscious experience. Belief in the badness of pain resists debunking because it’s formed through a mechanism that would evolve to be reliable. Much like beliefs about vision aren’t debunkable, neither are beliefs about our mental states, given that beings who can form accurate beliefs about their mental states are more likely to survive.
Second, as Bramble (2017) points out, evolution just requires that pain isn’t desired; it doesn’t require the moral belief that the world would be better if you didn’t suffer. Given this, there is no way to debunk normative beliefs about the badness of pain.
Third, there’s a problem of inverted qualia. As Hewitt (2008) notes, it seems eminently possible to imagine a being who sees red as blue and blue as red, without much functional change. However, undesirability seems to attach rigidly to pain, such that you couldn’t have a being with an identical qualitative experience of pain who seeks out and desires pain. This means that the badness and correlated undesirability of pain is a necessary feature, not subject to evolutionary change.
One could object that there are many people like sadists who do, in fact, desire pain. However, when sadists are in pain, the experience they gain is one they find pleasurable. This is not a counterexample to the rule, so much as one that shows that experiences can have many features in common with pain, while lacking its intrinsic badness. A decent analogy here would be food—eating the same food at different times will produce different results, even with the same general taste. If one finds a food disgusting, their experience of eating it will be bad. Traditionally painful experiences are similar in this regard—closely related experiences can actually be desirable.
Fourth, evolution can’t debunk the direct acquaintance we have with the badness of pain, any more than it could debunk the belief that we’re conscious. Much like I have direct access to the fact that I’m conscious, I similarly have direct access to the badness of pain. After I stub my toe, my conviction that the pain was bad is greater than my conviction that the external world exists.
Fifth, it’s plausible that beings couldn’t be radically deluded about the quality of their hedonic experiences, in much the same way they can’t be deluded about whether or not they’re conscious. It seems hard to imagine that an entity could have an experience of suffering yet want more of it.
Sixth, there’s a problem of irreducible complexity. Pain only serves an evolutionary advantage if it’s not desired when experienced. Thus, the experience evolving by itself would do no good. Similarly, a mutation that makes a being not want to be in pain would do no good, unless it already feels pain. Both of those require the other one to be useful, so neither would be likely to emerge by themselves. However, only the intrinsic badness of pain which beings have direct acquaintance with can explain these two emerging together.
Seventh, evolution gave us the ability to do abstract, careful reasoning. This reasoning leads us to form beliefs about moral facts, in much the same way it does for mathematical facts.
E Explanatorily Unnecessary
People often object to moral realism on the grounds that the moral facts are explanatorily unnecessary. The earlier comments apply — positing real moral facts explains, for example, the convergence in our moral views. It also explains our moral seemings — seemings that inform us that, for example, it’s wrong to torture infants for fun and would be so even if nobody thought it was.
F Objectionably Queer
Ever since the time of Mackie, it’s been objected that moral realism is objectionably queer — that something about it is deeply strange. However, it’s pretty unclear what exactly about it is supposed to be so strange. As Taylor says:
Firstly, there is ‘the metaphysical peculiarity of the supposed objective values, in that they would have to be intrinsically action-guiding and motivating’; related to this is ‘the problem how such values could be consequential or supervenient upon natural features’ of the world (p. 49)
However, it’s not clear why exactly this is so queer. As Huemer notes, many things are very different from everything else. Time is very different from other things, as is space, as are laws of physics — but we shouldn’t give up our belief in those things.
On top of this, it’s not clear why normativity is queer. There seem to be other things that are irreducibly normative — epistemic normativity seems on firm ground. One who believes the earth is flat, despite the available evidence, is objectively making an epistemic error and, in an epistemic sense, ought to change their views. None of this seems too queer.
Mackie just describes what morality is, before declaring that it’s too queer.
If you look at the attitudes of most everyday people towards the notion that it’s really wrong to torture infants for fun, it doesn’t seem strange at all to them.
Additionally, if one is too concerned about queerness, I think hedonism gives a particularly promising route for avoiding such worries. To quote my book:
There are several ways the hedonic facts resist the charge of being objectionably queer. The first is that our mental states are already very queer. If one assessed the odds that a universe made up of particles and waves, matter and energy, could sustain the smorgasbord of truly bizarre mental states that exist, the fact that some mental states are normative would be among the least surprising things about it. Start with the fundamental strangeness that there’s any consciousness at all—somehow generated by neurons—and then combine that with the bizarreness of the following mental states: color qualia (particularly when we consider that there are color qualia that no human will ever see but that non-human animals have seen), psychedelic experiences, the intrinsic motivation that comes with the experience of desire, the strangeness of taste qualia, and the fact that there are literal entire experiential dimensions that we will never access.
Once we become accustomed to these mental states, it’s very easy to no longer appreciate just how strange they are. Yet if we imagine what the mental states that we haven’t experienced must be like—for example, the experience of a bat using echolocation, or of experiencing four-dimensional objects—it becomes clear just how miraculous and bizarre our conscious experiences are. Thus, if something as strange as value were to lurk anywhere in the universe, the obvious place for it to be would be part of experience, alongside its equally strange brethren.
Yet there’s another account of why normative qualia wouldn’t be objectionably strange—namely, that the supposedly strange feature of qualia, their normativity, is something that we commonly accept. Every time a person makes a decision on account of something they know, they are treating their mental states as normative—they take particular facts or experiences of which they’re aware to count either for or against an act.
Take one simple example—when one puts their hand on a hot stove, they pull away rapidly. Something about the feeling of the stove seems to urge that one remove their hand from the stove—immediately!
Indeed, anti-realists commonly accept that desires have reason-giving force. However, if desires—a type of mental state—can have reason-giving force, there seems no reason in principle that valenced qualia can’t have reason-giving force.
Street (2008) provides a constructivist account of reasons—arguing we evolved to have a feeling of ‘to be doneness’. When one’s hand is on a hot stove, however, not only do they have a feeling of ‘to be avoidedness’ but that feeling seems to be fitting. Were they fully rational, that feeling wouldn’t go away. That’s because it’s a substantive property of some mental states—including the one experienced when one’s hand is on a hot stove—that they are simply worth avoiding.
Conclusion
Given the immense debate about moral realism, in this article, I have not been able to cover all of the relevant articles and arguments. However, I think I’ve summarized many of the main reasons to be a moral realist — some of which have, to the best of my knowledge, yet to be explored in the literature.
These arguments have been unapologetically pro-hedonist. This is because I think the anti-realist challenges are far weaker against hedonism than against other moral realist views.
2 Non-physicalism about consciousness
0 A Brief Introduction
Why is there something rather than nothing? This question is quite difficult—perhaps even as difficult as the hard problem of consciousness. However, let’s consider some clearly terrible answers to the question.
There isn’t—something is an illusion.
Something is a weakly emergent property of nothing. When you have nothing for a little while, it combines to form something. Science will soon explain how nothing becomes something. Positing that there’s something that exists and is not reducible to nothing is like vitalism or phlogiston.
But these answers are structurally quite similar to—and functionally just as unsuccessful as—many “solutions” to the hard problem of consciousness. In this blog post, I shall spell out why physicalist solutions to the hard problem fail—and why we need to be some type of dualist, idealist, or panpsychist.
Dispositionally, I’m an ardent physicalist: my intuitive, pre-theoretic leanings are strongly physicalist. However, when confronted with a brutal gang of facts, I was forced to abandon those leanings. This article draws heavily on the arguments of Chalmers in The Conscious Mind—definitely worth checking out for those who have not yet read it.
Let’s begin by defining physicalism. The SEP writes
Physicalism is, in slogan form, the thesis that everything is physical.
1 Broad Considerations
“Consciousness is a biological phenomenon,”
—John Searle, being wrong.
So why do I think that physical stuff cannot, even in principle, explain consciousness? Well, there are two closely related higher-order considerations, and then some more specific arguments.
The first broad consideration which explains why consciousness resists physicalist reduction is that physics explains things in terms of structure and function, as Chalmers notes. Physics gives equations to describe what things do and what they’re composed of. However, this cannot in principle explain what it’s like to eat a strawberry, see the color red, or be in love. When we look at an atom, we have no way of verifying whether or not it is conscious, because we only observe its causal impacts.
So this is not analogous to phlogiston or vitalism or anything else physicalists use as an analogy for consciousness. All of those are broadly explicable in terms of structure and function, and thus they don’t require any extra laws. Consciousness is different—it’s not even in principle explainable in terms of structure and function.
A second, related broad consideration—expressed eloquently by Kastrup—is that material stuff can be exhaustively explained quantitatively. Through physics, we get a series of equations. To quote Kastrup:
Chalmers basically said that there is nothing about physical parameters – the mass, charge, momentum, position, frequency or amplitude of the particles and fields in our brain – from which we can deduce the qualities of subjective experience. They will never tell us what it feels like to have a bellyache, or to fall in love, or to taste a strawberry. The domain of subjective experience and the world described to us by science are fundamentally distinct, because the one is quantitative and the other is qualitative.
2 Zombies
The most obvious way (although not the only way) to investigate the logical supervenience of consciousness is to consider the logical possibility of a zombie: someone or something physically identical to me (or to any other conscious being), but lacking conscious experiences altogether. At the global level, we can consider the logical possibility of a zombie world: a world physically identical to ours, but in which there are no conscious experiences at all. In such a world, everybody is a zombie.
So let us consider my zombie twin. This creature is molecule for molecule identical to me, and identical in all the low-level properties postulated by a completed physics, but he lacks conscious experience entirely. (Some might prefer to call a zombie “it,” but I use the personal pronoun; I have grown quite fond of my zombie twin.) To fix ideas, we can imagine that right now I am gazing out the window, experiencing some nice green sensations from seeing the trees outside, having pleasant taste experiences through munching on a chocolate bar, and feeling a dull aching sensation in my right shoulder.
—David Chalmers
An adequate model of physics will be able to describe what physically goes on in your brain. However, we can imagine a physical carbon copy of you that lacks consciousness. This shows consciousness is not purely physical—by contrast, we can’t imagine a carbon copy of H2O that isn’t water.
One confusion had by many is that the zombie argument presumes some type of epiphenomenalism, the notion that consciousness has no physical effect. This is false. If consciousness has a physical effect, the zombie would have some other law of physics fill in and play the functional role of consciousness. So if consciousness causes me to say things like “I’m conscious,” “I think therefore I am,” “consciousness poses a hard problem,” “Dan Dennett might be a zombie,” “consciousness can’t be explained reductively,” “Okay—Dennett is definitely a zombie,” etc—the zombie world would have some physically identical force fill in the functional role of consciousness and cause me to say all of those things.
Thus, the argument is as follows.
1 A being could be physically identical to me yet lack consciousness
2 Two beings that are physically identical must have all physical properties in common
Therefore, consciousness is not a physical property.
There’s much more that can be said on the topic of zombies. However, to me it seems quite obvious that zombies are possible—those who deny their possibility seem conceptually confused to me. No doubt that’s how I seem to them. I haven’t the time in this article to go into all of the accounts of the alleged impossibility of zombies, but it’s worth noting that the possibility of zombies is somewhat controversial.
3 Inverted Qualia
Even in making a conceivability argument against logical supervenience, it is not strictly necessary to establish the logical possibility of zombies or a zombie world. It suffices to establish the logical possibility of a world physically identical to ours in which the facts about conscious experience are merely different from the facts in our world, without conscious experience being absent entirely. As long as some positive fact about experience in our world does not hold in a physically identical world, then consciousness does not logically supervene.
It is therefore enough to note that one can coherently imagine a physically identical world in which conscious experiences are inverted, or (at the local level) imagine a being physically identical to me but with inverted conscious experiences. One might imagine, for example, that where I have a red experience, my inverted twin has a blue experience, and vice versa. Of course he will call his blue experiences “red,” but that is irrelevant. What matters is that the experience he has of the things we both call “red”—blood, fire engines, and so on—is of the same kind as the experience I have of the things we both call “blue,” such as the sea and the sky.
—Chalmers.
If consciousness just is a physical phenomenon, then it would be impossible to change conscious experiences without making a physical change. However, it seems eminently metaphysically possible that we could change consciousness without making a physical change. Imagine a world physically identical to ours but in which one tomato that I see appears 1% redder than it does currently. If you think that world is possible, then consciousness is not purely physical.
Note, I’m perfectly willing to grant that based on the world as it currently exists, such a state would be impossible. There are, in my view, psychophysical laws that govern consciousness which make it so that consciousness can’t be different. However, we could make tweaks to those laws without having a physical effect, which shows consciousness is not physical.
4 Epistemic Asymmetry
Argument 3: From Epistemic Asymmetry

As we saw earlier, consciousness is a surprising feature of the universe. Our grounds for belief in consciousness derive solely from our own experience of it. Even if we knew every last detail about the physics of the universe—the configuration, causation, and evolution among all the fields and particles in the spatiotemporal manifold—that information would not lead us to postulate the existence of conscious experience. My knowledge of consciousness, in the first instance, comes from my own case, not from any external observation. It is my first-person experience of consciousness that forces the problem on me.
From all the low-level facts about physical configurations and causation, we can in principle derive all sorts of high-level facts about macroscopic systems, their organization, and the causation among them. One could determine all the facts about biological function, and about human behavior and the brain mechanisms by which it is caused. But nothing in this vast causal story would lead one who had not experienced it directly to believe that there should be any consciousness. The very idea would be unreasonable; almost mystical, perhaps.
It is true that the physical facts about the world might provide some indirect evidence for the existence of consciousness. For example, from these facts one could ascertain that there were a lot of organisms that claimed to be conscious, and said they had mysterious subjective experiences. Still, this evidence would be quite inconclusive, and it might be most natural to draw an eliminativist conclusion—that there was in fact no experience present in these creatures, just a lot of talk.
If consciousness were a reductively explainable physical property, then we’d be able to deduce its existence from knowledge of the lower-level facts. However, this is manifestly impossible in the case of consciousness. If you knew everything about atoms, you’d be able to deduce the existence of fire and explain what it does. However, nothing about consciousness is evident from low-level descriptions of physical systems.
Why do you think others are conscious? Because you know you’re conscious, and others plausibly have features similar to the ones that make you conscious. This is not how we deduce that others can get sick—rather, we directly observe others getting sick. Even if we were in perfect health, it would be reasonable to infer that others get sick. But if you were not conscious, it would not be reasonable to infer that others were conscious. This is because consciousness is not deducible from the low-level physical facts.
5 The Knowledge Argument
The most vivid argument against the logical supervenience of consciousness is suggested by Jackson (1982), following related arguments by Nagel (1974) and others. Imagine that we are living in an age of a completed neuroscience, where we know everything there is to know about the physical processes within our brain responsible for the generation of our behavior. Mary has been brought up in a black-and-white room and has never seen any colors except for black, white, and shades of gray. She is nevertheless one of the world’s leading neuroscientists, specializing in the neurophysiology of color vision. She knows everything there is to know about the neural processes involved in visual information processing, about the physics of optical processes, and about the physical makeup of objects in the environment. But she does not know what it is like to see red. No amount of reasoning from the physical facts alone will give her this knowledge.
It follows that the facts about the subjective experience of color vision are not entailed by the physical facts. If they were, Mary could in principle come to know what it is like to see red on the basis of her knowledge of the physical facts. But she cannot. Perhaps Mary could come to know what it is like to see red by some indirect method, such as by manipulating her brain in the appropriate way. The point, however, is that the knowledge does not follow from the physical knowledge alone. Knowledge of all the physical facts will in principle allow Mary to derive all the facts about a system’s reactions, abilities, and cognitive capacities; but she will still be entirely in the dark about its experience of red.
—Guess who!
If consciousness were a reductively explainable physical property, then knowing all of the facts about the brain would make it possible to know what it’s like to see red, despite never having seen red. However, this is clearly impossible. No neuroscientific knowledge can communicate what it’s like to see red to one who has never seen red. If Mary left the room and saw a red tomato, she’d learn something new about what it’s like to see red. If she had previously wondered what it was like to see red, her curiosity would be satisfied only by seeing the color itself.
No amount of neurological knowledge could teach a deaf person what it’s like to hear Mozart or a blind person what it’s like to see the Grand Canyon. However, if consciousness were purely physical, this would be possible. If one knows all of the facts about bricks, one could know all relevant facts about brick walls, because a brick wall is an emergent property of bricks. If consciousness were merely physical, then much like full physical knowledge would teach you everything there is to be known about a tumor, supernova, or ocean, the same would be true of consciousness. However, this is manifestly impossible.
6 From The Absence Of Analysis
If proponents of reductive explanation are to have any hope of defeating the arguments above, they will have to give us some idea of how the existence of consciousness might be entailed by physical facts. While it is not fair to expect all the details, one at least needs an account of how such an entailment might possibly go. But any attempt to demonstrate such an entailment is doomed to failure. For consciousness to be entailed by a set of physical facts, one would need some kind of analysis of the notion of consciousness—the kind of analysis whose satisfaction physical facts could imply—and there is no such analysis to be had.
The only analysis of consciousness that seems even remotely tenable for these purposes is a functional analysis. Upon such an analysis, it would be seen that all there is to the notion of something’s being conscious is that it should play a certain functional role. For example, one might say that all there is to a state’s being conscious is that it be verbally reportable, or that it be the result of certain kinds of perceptual discrimination, or that it make information available to later processes in a certain way, or whatever. But on the face of it, these fail miserably as analyses. They simply miss what it means to be a conscious experience. Although conscious states may play various causal roles, they are not defined by their causal roles. Rather, what makes them conscious is that they have a certain phenomenal feel, and this feel is not something that can be functionally defined away.
—Greg, just kidding, Chalmers obviously.
When we consider facts about a physical system, none of them make it obvious why those things would make it conscious. Consider, for example, the integrated information theory, which says that when one system processes a variety of different types of information, it becomes conscious, with its consciousness proportional to the amount of integrated information. When information is integrated, nothing about that physical state obviously produces consciousness. It seems like there’s a further question—we know a system has integrated information, but that doesn’t settle whether it’s conscious.
Consciousness is not just integrated information. It seems eminently possible to imagine a non-conscious system that integrates information. When we identify the neural correlates of consciousness, it’s never obvious why those things would be conscious. We can understand why H2O is water, but there is no comparable explanation of why the neural correlates of consciousness are consciousness.
7 Disembodied Minds
If consciousness were just a physical phenomenon, then disembodied minds would be metaphysically impossible. Because heat just is the rapid movement of particles, disembodied heat is impossible. To have heat, one needs particles moving rapidly.
It would make no sense to talk about a non-physical tortoise, box, or pancreas, because these are physical phenomena. However, disembodied minds—minds without bodies—seem metaphysically possible. We could imagine mental functions going on, even in the absence of a body. This shows that consciousness isn’t a purely physical property—it could exist in the absence of physical things.
8 Some Concluding Thoughts On Why This Isn’t Vitalism
Vitalism is the notion that living organisms contain some fundamental, non-physical life-causing substance—“élan vital.” Many have drawn analogies between non-physicalism about consciousness and vitalism, as both posit a non-material thing. However, it’s worth noting that none of the arguments above apply to vitalism.
Life just is a matter of structure and function and can be described quantitatively—so it’s not susceptible to the first argument. A life zombie—something physically identical to a living thing but that isn’t alive—is obviously impossible, so the zombie argument fails too. It is possible to use low-level phenomena to explain life, unlike consciousness. There’s no analogue of the inverted qualia argument. Knowing all the physical facts about a system would let you know whether it’s alive and all the facts about its life; there is an account of how cells replicate and comprise life; and disembodied life is obviously impossible.
The properties that were appealed to for vitalism were non-physical properties, but ones that we now know don’t exist. There’s nothing it is to be alive over and above the physical facts relating to cell replication, growth, and the other things required for life. Thus, the correct view about vitalism was illusionism—the properties that Élan vital was posited to explain weren’t real. But we know consciousness is real! It’s the most certainly known natural phenomenon—we can be more certain that we’re conscious than we can be of anything else.
Abandoning physicalism isn’t abandoning an answer to the problems of consciousness—it merely recognizes what form the answer must take. Non-physicalist theories are testable and make predictions which can subsequently be verified.
Sometimes, the correct answers are surprising and run afoul of our heuristics. Generally people worrying about new technology are wrong, but not when it comes to AI alignment. Usually, Parfit is right, but not when it comes to the repugnant conclusion. Preachy vegans are irritating, but they’re right. Reductionism is enticing—it would be so nice if consciousness were just some physical phenomenon—but there are knockdown arguments against such a view. We mustn’t be held captive by reductionist dogma in the face of overwhelming evidence.
Eliezer is provably wrong about zombies
I enjoy much of what Eliezer Yudkowsky says. He’s been a large part of raising worries about AI alignment, writes tons of interesting LessWrong posts, wrote the epic HPMOR, and has shaped my thinking in many ways. However, Yudkowsky is, as the title hints at, wrong about zombies.
A zombie is a being physically identical to a conscious being in every way, minus the consciousness. The important thing to note is that, if consciousness is causally efficacious, the zombie would have other things filling the causal roles that consciousness plays in the person.
Yudkowsky writes
Your “zombie”, in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.
It is furthermore claimed that if zombies are “possible” (a term over which battles are still being fought), then, purely from our knowledge of this “possibility”, we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is “epiphenomenalism”.
Note, when we use possibility here, we’re describing metaphysical possibility, not physical possibility. So the question is whether there is a possible world that is atom for atom identical to this world but that lacks consciousness. All of the things done by consciousness would be done by other laws that are functionally identical to consciousness in this world, but that don’t contain any experiences.
Eliezer’s claim that this view is epiphenomenalism is false. Epiphenomenalism says consciousness doesn’t cause anything. One can accept the possibility of zombies without accepting epiphenomenalism, because the zombie world would have something else do what your consciousness does in this world.
(For those unfamiliar with zombies, I emphasize that this is not a strawman. See, for example, the SEP entry on Zombies. The “possibility” of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)
But it is a strawman! The zombie argument doesn’t entail epiphenomenalism. It’s often made by interactionist dualists, panpsychists, and idealists. It’s frustrating that Eliezer strawmans the argument while specifically talking about not strawmanning it. I’m not suggesting bad faith here; it’s just a bit frustrating.
When you open a refrigerator and find that the orange juice is gone, you think “Darn, I’m out of orange juice.” The sound of these words is probably represented in your auditory cortex, as though you’d heard someone else say it. (Why do I think this? Because native Chinese speakers can remember longer digit sequences than English-speakers. Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous “seven plus or minus two” for English speakers. There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)
Let’s suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies. Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about). It’s not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone’s auditory cortex and read out their internal narrative. (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)
So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes “Darn, I’m out of orange juice”. On this point, epiphenomenalists would willingly agree.
But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing. The internal narrative is spoken, but unheard. You are not the one who speaks your thoughts, you are the one who hears them.
If we look inside the brain, what we see happening involves the flow of electric signals from your brain to the muscles in your arm, resulting in the refrigerator opening. The point of the zombie argument is that you could imagine a world where all of that goes on in exactly the same way—it looks precisely the same from the outside in terms of the movement of all of the atoms—but you are not conscious when it goes on.
I’m not an epiphenomenalist (my credence in it is around 10%), but epiphenomenalists can give an explanation of this. If consciousness just is what it feels like for the brain to do things, then it will seem as though your consciousness causes your actions, when really it merely accompanies what the brain does.
The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just “possible in theory”, or “imaginable”, or something along those lines—then consciousness must be extra-physical, something over and above mere atoms. Why? Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.
Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.
This is false. When Chalmers is defining views about philosophy of mind, he writes
10 Type-E Dualism
Type-E dualism holds that phenomenal properties are ontologically distinct from physical properties, and that the phenomenal has no effect on the physical.[*] This is the view usually known as epiphenomenalism (hence type-E): physical states cause phenomenal states, but not vice versa. On this view, psychophysical laws run in one direction only, from physical to phenomenal. The view is naturally combined with the view that the physical realm is causally closed: this further claim is not essential to type-E dualism, but it provides much of the motivation for the view.
Obviously epiphenomenalism is different from Descartes’ dualism. Descartes was a substance dualist and interactionist. These extra views aren’t required for dualism. Zombieism, as Eliezer calls it, can be dualist or panpsychist—it just has to reject physicalism.
Something will seem possible—will seem “conceptually possible” or “imaginable”—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) …), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
So just because you don’t see a contradiction in the Zombie World at first glance, it doesn’t mean that no contradiction is there. It’s like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility (“I don’t see a problem”) to logical possibility in the full technical sense, is a very great leap. It’s easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it’s logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.
Just because you don’t see a contradiction yet, is no guarantee that you won’t see a contradiction in another 30 seconds. “All odd numbers are prime. Proof: 3 is prime, 5 is prime, 7 is prime...”
This is of course true. The question for zombies isn’t just whether we can imagine them—I can imagine Fermat’s Last Theorem being false, but its falsity isn’t possible—but whether it’s metaphysically possible that they exist.
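Eliezer’s 3-SAT point can be made concrete. The sketch below is a toy brute-force satisfiability checker in Python (the clause encoding and function name are my own, purely for illustration): the only fully general method we know for certifying that a formula contains no contradiction is, in the worst case, to check every one of the 2^n truth assignments, which is exactly why “I don’t see a contradiction at first glance” is weak evidence of consistency.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check: try all 2**n_vars truth assignments.

    Each clause is a tuple of integer literals: +i means variable i is
    true, -i means variable i is false (variables numbered from 1).
    """
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # A formula is satisfied when every clause has some true literal.
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True   # found a model: no contradiction here
    return False          # every assignment fails: the formula is contradictory

# (A or B or C) and (not-A or not-B or not-C) has a model...
print(satisfiable([(1, 2, 3), (-1, -2, -3)], 3))  # True
# ...but (A) and (not-A) hides a contradiction.
print(satisfiable([(1,), (-1,)], 1))              # False
```

Nothing philosophical hangs on the details; the point is just that the loop runs exponentially many times in the worst case, which is the computational gap between conceptual and logical possibility that Eliezer is gesturing at.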
So let us ponder the Zombie Argument a little longer: Can we think of a counterexample to the assertion “Consciousness has no third-party-detectable causal impact on the world”?
If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of “I am aware” and “My awareness is separate from my thoughts” and “I am not the one who speaks my thoughts, but the one who hears them” and “My stream of consciousness is not my consciousness” and “It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior.”
You can even say these sentences out loud, as you meditate. In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.
This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.
Imagine that a mysterious race of aliens visit you, and leave you a mysterious black box as a gift. You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction. You can’t make the black box produce gold coins or answer questions. So you conclude that the black box is causally inactive: “For all X, the black box doesn’t do X.” The black box is an effect, but not a cause; epiphenomenal; without causal potency. In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—”Does the black box turn lead to gold? No. Does the black box boil water? No.”
But you can see the black box; it absorbs light, and weighs heavy in your hand. This, too, is part of the dance of causality. If the black box were wholly outside the causal universe, you couldn’t see it; you would have no way to know it existed; you could not say, “Thanks for the black box.” You didn’t think of this counterexample, when you formulated the general rule: “All X: Black box doesn’t do X”. But it was there all along.
(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven’t the slightest clue that it’s there in your living room. That was their joke.)
If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think “I am aware that I am aware”—and say out loud, “I am aware that I am aware”—then your consciousness is not without effect on your internal narrative, or your moving lips. You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.
I have not seen the above argument written out that particular way—”the listener caught in the act of listening”—though it may well have been said before.
I think this is a pretty good argument against epiphenomenalism. However, this does nothing to show that consciousness is physical, and it doesn’t answer the zombie argument. Consider an analogy—imagine that the cause of gravity is a god willing gravity to be so, one who is defined as being non-physical. Even though gravity is caused by the non-physical mind, we could imagine a world that’s physically identical, where gravity is caused by something else, other than the non-physical mind. Consciousness is the same.
But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World’s philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.
At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.
Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world. You can argue clever reasons why this is not so, but you have to be clever.
You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like “There is a mysterious listener within me,” because the mysterious listener would be gone. It is usually right after you focus your awareness on your awareness, that your internal narrative says “I am aware of my awareness”, which suggests that if the first event never happened again, neither would the second. You can argue clever reasons why this is not so, but you have to be clever.
But again, you could have some functional analogue that does the same physical thing that your consciousness does. Any physical effect that consciousness has on the world could in theory be caused by something else. If consciousness has an effect on the physical world, it’s no coincidence that its functional replacement would have to be hyper-specific, causing you to talk about consciousness in exactly the same way.
One strange thing you might postulate is that there’s a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.
A Zombie Master doesn’t seem impossible. Human beings often don’t sound all that coherent when talking about consciousness. It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar. Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today’s models but not self-modifying; and get back discourse about “consciousness” that sounded as sensible as most humans, which is to say, not very.
But this speech about “consciousness” would not be spontaneous. It would not be produced within the AI. It would be a recorded imitation of someone else talking. That is just a holodeck, with a central AI writing the speech of the non-player characters. This is not what the Zombie World is about.
By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are “bridging laws” that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.
This is not true. As this paper notes
[A]n interactionist dualist can accept the possibility of zombies, by accepting the possibility of physically identical worlds in which physical causal gaps go unfilled, or are filled by something other than mental processes. The first possibility would have many unexplained physical events, but there is nothing metaphysically impossible about unexplained physical events. Also: a Russellian “panprotopsychist”, who holds that consciousness is constituted by the unknown intrinsic categorical bases of microphysical dispositions, can accept the possibility of zombies by accepting the possibility of worlds in which the microphysical dispositions have a different categorical basis, or none at all. (Chalmers 2004:184)
Chalmers himself notes in a comment below the original post
It seems to me that although you present your arguments as arguments against the thesis (Z) that zombies are logically possible, they’re really arguments against the thesis (E) that consciousness plays no causal role. Of course thesis E, epiphenomenalism, is a much easier target. This would be a legitimate strategy if thesis Z entails thesis E, as you appear to assume, but this is incorrect. I endorse Z, but I don’t endorse E: see my discussion in “Consciousness and its Place in Nature”, especially the discussion of interactionism (type-D dualism) and Russellian monism (type-F monism). I think that the correct conclusion of zombie-style arguments is the disjunction of the type-D, type-E, and type-F views, and I certainly don’t favor the type-E view (epiphenomenalism) over the others. Unlike you, I don’t think there are any watertight arguments against it, but if you’re right that there are, then that just means that the conclusion of the argument should be narrowed to the other two views. Of course there’s a lot more to be said about these issues, and the project of finding good arguments against Z is a worthwhile one, but I think that such an argument requires more than you’ve given us here.
Thus, even if consciousness causes things, that’s just a description of what consciousness does. One could imagine a world where all the atoms move in the same way, as if prompted by consciousness, but are not caused by anything conscious. A subjective experience may do something causally, but on interactionism you could imagine a physical law that does exactly the same things consciousness does. Next, Eliezer says
The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie’s lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)
Interactionism doesn’t hold that consciousness is not experimentally detectable—that’s not a necessary entailment of dualism. The zombie world on interactionism wouldn’t need an extra zombie master. Suppose that the psychophysical law in this world is that when you get a bunch of neurons together they become conscious and then their desires exert some force. Well, the zombie world would have the same forces exerted, just minus the mental state of desires.
Why would anyone bite a bullet that large? Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?
The reason is that consciousness is not merely causal. It does cause things, but there’s something it’s like to see red over and above what seeing red causes. Thus, in theory you could take away that phenomenal character and still have a causal isomorph. The reasons some people postulate that consciousness is causally inert are:
A) there are problems incorporating its causal role into physics.
B) All one has to posit is that when a person has a particular desire, it corresponds with the physical effect. Epiphenomenalists argue that the simplest psychophysical laws involve the physical state that is about to raise your arm causing consciousness, rather than the other way around.
Zombie-ists are property dualists—they don’t believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.
“Beyond the physical”? What does that mean? It means the extra properties are there, but they don’t influence the motion of the atoms, like the properties of electrical charge or mass. The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.
One can be an interactionist property dualist. Property dualism just requires saying that consciousness is a property of matter, not its own separate substance.
Once you’ve postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the “mysterious redness of red”?
Isn’t Descartes taking the simpler approach, here? The strictly simpler approach?
Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?
Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?
I am not endorsing Descartes’s view. But at least I can understand where Descartes is coming from. Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness. Fine.
I lean towards interactionist dualism, so I’m in agreement with Eliezer here. However, the claim that dualism is motivated by finding something that seems mysterious and then just positing mysterious stuff is totally wrong. Dualists don’t just give up on explanations—there are lots of ways that specific dualists have experimentally tested their theories.
There are lots of reasons to posit dualism of some sort, which I lay out here. The fundamental reason is that the laws of physics explain physics in terms of structure and function—yet none of that is able to explain the subjective experience of seeing red, for example. Subjective experience is neither structural nor functional, so the physics based account that explains it in terms of structure and function is wholly inadequate.
Chalmers critiques substance dualism on the grounds that it’s hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?
When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?
This is not accurate. For one, Chalmers is now pretty undecided between different versions of non-physicalism. Chalmers objects to substance dualism based on it violating causal closure of the physical, having trouble explaining how consciousness would interact, and plausibly being ruled out by physics.
Overall, I quite like Eliezer, as I said at the outset. However, it’s frustrating that when it comes to consciousness, he just seems very lost. This is particularly a problem given that consciousness is literally the most important thing in the universe—the only important thing in the universe. So it’s really, really, really important not to get things wrong, when it comes to consciousness.
Eliezer at one point says
That-which-we-name “consciousness” happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
Yet within physics, the last 3,000 times, we haven’t just posited the same old laws: Newton discovered brand new laws, as did Einstein. Consciousness is no more fundamentally mysterious—there are just some fundamental psychophysical laws that result in consciousness, which has a causal effect on the world.
Trying to explain it with the same old stuff, when we have lots of knock-down arguments against the ability of the old stuff to explain it—that’s an appeal to magic. Eliezer’s reductive account involves positing that when you have some physical things, they just produce experience, despite our inability to do any of the following:
A) Understand how physics could go beyond explaining structure and function.
B) Provide any account of how brain stuff generates consciousness.
C) Provide a physical description of any type of conscious state.
All of the other accounts of successful reduction have involved explaining the behavior of things at a higher level, by appealing to lower level facts. But this just won’t work for consciousness! Consciousness isn’t about behavior. When we ask whether AI is conscious, we don’t care whether they say verbally that they’re conscious. What we care about is whether the ineffable what it’s like stuff is present in the AI.
This is a seriously important mistake for effective altruists to avoid. We must not wish away and ignore the fundamental difficulty of the hardest problem in the universe. Saying “it just emerges” is not a good solution. And yet I fear that’s the solution of many of my fellow effective altruists and rationalists—a mistake that could be very costly.
It seems you are not even building and fighting a strawman; you are fighting straw-windmills. You are so sure you are right about contentious topics that it’s off-putting.
Here is what a reasonable take on moral relativism might look like—an example from Sean Carroll (https://www.preposterousuniverse.com/podcast/2022/12/05/ama-december-2022/):
Consider learning from the masters.
The symmetry problem—the fact that every relativist can equally criticise every other—is a bug, not a feature. If there is no reasoned way to resolve a dispute, force will take the place of reason. In fact, it’s a straw man to say that the realist objection to relativism is that relativists can’t criticise… The actual point is that it is in vain… No relativist has a motivation to change their mind.
You use the logic “A->B, B is unpleasant, hence A is false”.
No, I use the logic “thing needs additional component to work”. My approach is based on replacing is-true with is-useful.
I don’t have much in the way of any general opposition to Carroll’s remarks in the quote you provide, but I do think Carroll characterizes relativism in a way that may be inaccurate, or at least incomplete. According to Carroll:
This may be true of some forms of moral relativism, but not all or the most defensible forms. Nothing about moral relativism prohibits the relativist from judging the moral actions of other people, or the cultural standards of other cultures, nor does relativism entail that they have no right or leverage to criticize those cultures. After all, the latter appear to be moral or at least normative claims themselves, and if you’re a relativist, you could reasonably ask: no right or leverage relative to what moral standard? The standards of the people or cultures I am judging, or relative to my own standards? A relativist does not have to think they can only judge people according to those people’s standards; they can endorse appraiser relativism, and think that they can judge others relative to their own standards.
One shortcoming in descriptions of moral relativism is that they frequently fail to distinguish between agent and appraiser relativism. Agent relativism holds that moral standards are true or false relative to the agent performing the action (or that agent’s culture). Appraiser relativism holds that moral standards are true or false relative to the moral framework of the appraiser (or the appraiser’s culture) judging the action in question. Here’s how the SEP distinguishes them:
Many common depictions of relativism focus on agent relativism. And this seems consistent with Carroll’s description. Yet I suspect this emphasis stems from a tendency to characterize relativism in ways that seem to have more straightforward normative implications: people often reject relativism because it purportedly encourages or mandates indifference towards people with different moral standards. But this would only be true of at best some forms of moral relativism. Incidentally, Gowans, the author of the SEP article on moral relativism, says:
I don’t know if this is true. But if it is, there’s something odd about depictions of relativism that seem closer to agent relativism than appraiser relativism. Appraiser relativism can get you something pretty close to the kind of constructivism Carroll describes, so I don’t think the relativism/constructivism distinction was necessary here. Relativism itself has the resources to do what Carroll proposes.
Again, that isn’t the objection. The objection is that such judgements achieve nothing.
My goal with the remark is to accurately characterize relativism. Not to defend it. If someone wants to object to relativism on the grounds that it doesn’t achieve anything, that’s orthogonal to the point I was making. I’m not really sure I understand the objection, though. When you say the judgments achieve nothing, can you clarify what you mean? If I judge others as doing something wrong, I’m not sure why it would be an objection to tell me that this doesn’t achieve anything. Would it avoid the objection by achieving something in particular? If so, what?
Ethics is supposed to do things, not be an ivory tower approach.
The symmetry problem, the fact that every relativist can equally criticise every other, is a bug not a feature.
Alice: “Stop that, it’s wrong-for-me!” Bob: “It’s OK by me, so I’m going to carry on.”
Etc., ad infinitum.
If there is no reasoned way to resolve a dispute, force will take the place of reason. In fact, it’s a straw man to say that the realist objection to relativism is that relativists can’t criticise… The actual point is that it is in vain… No relativist has a motivation to change their mind.
This is a common feature of moral disputes even when no relativism is involved. Compare:
“You shouldn’t do that.” “It’s fine according to my values, and that’s all that matters.”
“You shouldn’t do that.” “Yes I should; you’re wrong about morality.”
If there’s an important difference between these that makes 1 problematic and 2 not, I’m failing to see it. In practice, the way you convince someone to change their behaviour is some combination of (a) appealing to moral ideas they do agree with you about and (b) influencing them not-explicitly-rationally to change their values (e.g., by exposing them to people they currently condemn so that they can see for themselves that they’re decent human beings). And both of these work equally well (or badly) whether either or both of the parties are moral realists.
1 is necessarily subjective, and 2 isn’t.
Maybe in normie-land, but in philosophy you can go up meta levels.
Yes, 1 is necessarily subjective and 2 isn’t. But since what you were trying to do is to show that subjectivism is bad, it’s not really on to take “it’s subjective!” as a criticism.
Philosophers and other intellectual sorts may indeed be more open than normies to rational persuasion in matters of ethics. (So probably more of (a) and less of (b).) They’re also not much given to resolving their disagreements by brute force, realist or not, relativist or not, so your concern that “force will take the place of reason” doesn’t seem very applicable to them. Is there any evidence that philosophers who are moral realists are more readily persuaded to change their ethical positions than philosophers who are moral nonrealists? For what it’s worth, my intuition expects not.
I’ve already given the argument against subjectivism.
Your argument was that for subjectivists “such judgements achieve nothing” on the grounds that “every relativist can equally criticise every other” because when criticized someone can say “It’s OK by me, so I’m going to carry on”, so that “force will take the place of reason” since “no relativist has a motivation to change their mind”.
I objected that this argument actually applies just as much to moral realists, the only difference being that the response changes from “It’s OK by me” to “It’s OK objectively”. No one is going to be convinced just by being told “X is wrong”; you have to offer some sort of argument starting from premises they share, and that’s exactly as true whether the people involved are realists or not, subjectivists or not, relativists or not. (Or, in either case, you can try to persuade by not-explicitly-rational means like just showing them the consequences of their alleged principles, or making them personally acquainted with people they are inclined to condemn, or whatever; this, too, works or fails just the same whether anyone involved is objectivist or subjectivist.)
When I made this objection, your reply was that “It’s OK by me” is “necessarily subjective” and “It’s OK objectively” isn’t. But if your argument against subjectivism depends on it being bad for something to be subjective then it is a circular argument.
Maybe that’s not what you meant. Maybe you were just doubling down on the claim that being “necessarily subjective” means there’s no hope of convincing anyone to change their moral judgements. But that’s exactly the thing I’m disagreeing with, and you’re not offering any counterargument by merely reiterating the claim I’m disagreeing with.
Obviously they are not, and that was not my argument.
I know.
My argument was:-
Yeah, that was your argument originally. But when I explained why I didn’t buy it you switched to “1 is necessarily subjective, and 2 isn’t” as if being subjective is known to be a fatal problem—but the question at issue is precisely whether being subjective is a problem or not!
Anyway: Anyone can equally criticize anyone, relativist or not, subjectivist or not, realist or not. Can you give some actual, reasonably concrete examples of moral disagreements in which moral nonrealism makes useful discussion impossible or pointless or something, and where in an equivalent scenario involving moral realists progress would be possible?
If I try to imagine such an example, the sort of thing I come up with goes like this. X and Y are moral nonrealists. X is torturing kittens. Y says “Stop that! It’s wrong!” X says “Not according to my values.” And then, if I understand you aright, Y is supposed to give up in despair because “every relativist can equally criticise every other” or something. But in practice, (1) Y need not give up, because maybe there are things in X’s values that Y thinks actually lead to the conclusion that one shouldn’t torture kittens, and (2) in a parallel scenario involving moral realists, the only difference is that X just says “No it isn’t”, and if Y wants not to give up here then they have to do the same as in the nonrealist scenario: find things X agrees with from which one can get to “don’t torture kittens”. And all the arguments are just the same in the two cases, except that in one Y has to be explicit about where they’re explicitly appealing to some potentially controversial matter of values. This is, it seems to me, not a disadvantage. (Those controversial matters are just as controversial for moral realists.)
Perhaps this isn’t the kind of scenario you have in mind. Or perhaps there’s some specific kind of argument you think realist-Y can make that might actually convince realist-X, that doesn’t have a counterpart in the nonrealist version of the scenario. If so, I’m all ears: show me the details!
I can think of one kind of scenario where progress is easier for realists. Kinda. Suppose X and Y are “the same kind” of moral realist: e.g., they are both divine command theorists and they belong to the same religion, or they are both hedonistic act-utilitarians, or something. In this case, they should be able to reduce their argument about torturing kittens to a more straightforwardly factual argument about what their scriptures say or what gives who how much pleasure. But this isn’t really about realism versus nonrealism. If we imagine the nearest nonrealist equivalents of these guys, then we find e.g. that X and Y both say “What I choose to value is maximizing the net pleasure minus pain in the world”—and then, just as if they were realists, X and Y can in principle resolve their moral disagreement by arguing about matters of nonmoral fact. And if we let X and Y remain realists, but have them be “of different kinds”—maybe X is a divine command theorist and Y is a utilitarian—then they can be as utterly stuck as any nonrealists could be. Y says: but look, torturing kittens produces all this suffering! X says: so what? suffering has nothing to do with value; the gods have commanded that I torture kittens. And the difficulty they have in making progress from there is exactly the same sort of difficulty as their nonrealist equivalents would have.
(I remark that “It would be awful if X were true, therefore X is false” is not a valid form of argument, so even if you are correct about moral nonrealism making it impossible or futile to argue about morality that wouldn’t be any reason to disbelieve moral realism. But I don’t think you are in fact correct about it.)
Only in the ultimate clown universe where there are no facts or rules.
But if those things are subjective, the same problem re-applies.
Any realist argument that could do that. So long as there is such a thing. I think your real objection is that there are no good realist arguments. But you can’t be completely sure of that. If there is a 1% chance of a successful realist argument, then rational debaters who want to converge on the truth should take that chance, rather than blocking it off by assuming subjectivism.
If you assume subjectivism, you are guaranteed not to get onto a realist argument. If you assume realism, there is a possibility, but not a guarantee, of getting onto a realist solution.
It’s entirely valid if you are constructing something. Bridges that fall down are awful, so don’t construct them that way.
I think that when you say “if those things are subjective, the same problem re-applies” you are either arguing in a circle, or claiming something that’s just false.
Suppose X is a moral nonrealist (but not a nihilist: he does have moral values, he just doesn’t think they’re built into the structure of the universe somehow), and he’s doing something that actually isn’t compatible with his moral values but he hasn’t noticed. Crudely simple toy example for clarity: he’s torturing kittens because he’s a utilitarian and enjoys torturing kittens, but he somehow hasn’t considered the kittens’ suffering at all in his moral reckoning. Y (who, let’s suppose, is also a moral nonrealist, though it doesn’t particularly matter) points out that the kittens are suffering terribly. X thinks about it for a while and agrees that indeed his values say he shouldn’t torture kittens, and reluctantly stops doing it.
This seems to me a perfectly satisfactory way for things to go, and in particular it is no less satisfactory than if X is a moral realist who believes that hedonistic utilitarianism is an objective truth and stops torturing kittens because Y convinces him that the objective truth of hedonistic utilitarianism implies the objective truth that one shouldn’t torture kittens, rather than “merely” that his own acceptance of hedonistic utilitarianism implies that he shouldn’t torture kittens.
“Oh, but instead of being convinced X could just say: meh, maybe you’re right but who cares? And then Y will have no good arguments.” Sure. But that’s an argument not against moral nonrealism but against moral nihilism: against not actually having any moral values of any sort at all.
“Oh, sure, X may be convinced, but that doesn’t count because it wasn’t a realist argument. Only realist arguments count.” Well, then your argument is perfectly circular: nonrealism is bad because nonrealists can’t make realist arguments. And, sure, I will gladly concede that if you take it as axiomatic that nonrealism is bad then you can conclude that nonrealism is bad, but so what?
No, my real objection is not that there are no good realist arguments. I’m not sure quite what you mean by that phrase, though.
If you mean arguments that start from only nonmoral premises and deduce moral truths then as it happens I don’t believe there are any; if there are then indeed moral realism is correct; but, also, if there are then they should have as much force for an intelligent and openminded nonrealist (who will, on understanding the arguments, stop being a nonrealist) as for a realist.
If you mean arguments that assume realism but not anything more specific then I rather doubt that that assumption buys you anything, though I’m willing to be shown the error of my ways. At any rate, I can’t see how that assumption is ever going to be any use in, say, arguing that X shouldn’t be torturing kittens.
If you mean arguments that assume some specific sort of realism (e.g., that every moral claim in the New Testament is true, or that the best thing to do is whatever gives the greatest expected excess of pleasure over pain) then (1) these will have no more force for a realist who doesn’t accept that particular kind of realism than for a nonrealist and (2) they will have as much force for a nonrealist who embraces the same moral system (not very common for divine-command theories, I guess, but there are definitely nonrealist utilitarians).
Again: I would like to see a concrete example of how this is supposed to work. You say “any realist argument” but it seems to me that that’s obviously wrong for the reason I’ve already given above: “you shouldn’t torture kittens because hedonistic utilitarianism is objectively right and torturing kittens produces net excess suffering” is a realist argument, but it is exactly paralleled by “you shouldn’t torture kittens because you are a hedonistic utilitarian, and torturing kittens produces net excess suffering” which is a perfectly respectable argument to make to a nonrealist hedonistic utilitarian.
Of course I agree that I can’t be completely sure that there are no good realist arguments (whatever exactly you mean by that), or indeed of anything else. If a genuinely strong argument for moral realism comes along, I hope I’ll see its merits and be convinced. I’m not sure what I’ve said to make you think otherwise.
It seems to me that your last paragraph amounts to a wholehearted embrace of moral nonrealism. If moral realism versus nonrealism is something we are constructing, something we could choose to be one way or the other according to what gives the better outcomes—why, then, in fact moral realism is false. (Because if it is true, then we don’t have the freedom to choose to believe something else in pursuit of better outcomes, at least not if we first and foremost want our beliefs to be true rather than false.)
I sense that there may have been a bit of a miscommunication. I don’t think that constructivism per se is crazy—I think it’s wrong, but it’s held by smart respectable people. It’s cultural relativism that’s held by no-one reasonable—the idea that, if society approves of vicious torture, it’s okay to torture people is crazy. This is one reason why there are virtually no contemporary defenders of cultural relativism. Also, I’m not so sure that I’m right—I’m 85% confident in moral realism and 70% confident in non-physicalism!
Moral relativism does not necessarily entail that if society approves of torture, then torture is “okay.” It only entails that it’s okay relative to that culture’s moral standards. But it does not follow that other individuals or cultures must also think it’s okay. They can think it’s not okay.
Relativism holds that moral claims are true or false relative to the standards of individuals or groups. So a claim like “torture is not wrong” would mean something like “torture is not inconsistent with our culture’s moral standards.” If it isn’t inconsistent with a culture’s moral standards, the statement would be trivially true. Furthermore, an appraiser relativist does not have to tolerate another individual or culture with different moral standards acting in accordance with those moral standards. At best, only certain forms of agent relativism, which hold that an action is morally right or wrong relative to the standards of the agent performing an act (or that agent’s culture), would require that kind of tolerance. As Gowans notes in the SEP entry on agent and appraiser relativism:
“[...] that to which truth or justification is relative may be the persons making the moral judgments or the persons about whom the judgments are made. These are sometimes called appraiser and agent relativism respectively. Appraiser relativism suggests that we do or should make moral judgments on the basis of our own standards, while agent relativism implies that the relevant standards are those of the persons we are judging (of course, in some cases these may coincide). Appraiser relativism is the more common position, and it will usually be assumed in the discussion that follows.”
Are you rejecting agent relativism, appraiser relativism, or both with your example of torture?
As far as most philosophers not being relativists: this isn’t to say you’re mistaken (since that’s also my impression) but what are you basing that conclusion off of?
I agree relativism doesn’t entail that—cultural relativism does, however. Cultural relativism holds that right means approved of by my culture. This applies to both appraiser and agent relativism—as long as someone thinks something is right just because it’s supported by society, it will have a similar reductio.
What’s the reductio, exactly?
Ethics teachers report that their classes consist almost entirely of relativists, and they have to start the course by putting a preliminary case for realism, just to get the students to realise there is more than one option.
Yes, and, in addition to that, the best current studies on how nonphilosophers think about these issues find that across a variety of paradigms, respondents in the US tended to favor antirealism at a ratio of about 3:1, with most endorsing some type of relativism. See Pölzler and Wright (2020). In other words, when given the option to endorse a variety of metaethical positions, about 75% of the respondents in this study favored some type of antirealism.
Note that P&W’s studies relied on online samples from a population that is disproportionately nonreligious, and student samples, which are disproportionately more inclined towards relativism (see Beebe & Sackris, 2016), so they are probably not representative of the United States population as a whole.
References
Beebe, J. R., & Sackris, D. (2016). Moral objectivism across the lifespan. Philosophical Psychology, 29(6), 912-929.
Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11(1), 53-82.
The problem isn’t that he’s overly sure about “contentious topics.” These are easy questions that people should be sure about. The problem is that he’s sure in the wrong direction.
They are not easy questions, and if you think they are, you don’t understand the subject. If a subject has five counterarguments for every argument, as philosophy does, then the less you know, the more any individual claim seems plausible.
Incidentally, I am unable to guess what you think the one true ethics is.
Can you clarify which questions you take to be easy? I’m not necessarily disagreeing. I’m trying to get clear on what you take to be easy questions, and what you take the answer to be.
On the question of morality, objective morality is not a coherent idea. When people say “X is morally good,” it can mean a few things:
Doing X will lead to human happiness
I want you to do X
Most people want you to do X
Creatures evolving under similar conditions as us will typically develop a preference for X
If you don’t do X, you’ll be made to regret it
etc...
But believers in objective morality will say that goodness means more than all of these. It quickly becomes clear that they want their own preferences to be some kind of cosmic law, but they can’t explain why that’s the case, or what it would even mean if it were.
On the question of consciousness, our subjective experiences are fully explained by physics.
The best argument for this is that our speech is fully explained by physics. Therefore physics explains why people say all of the things they say about consciousness. For example, it can explain why someone looks at a sunset and says, “This experience of color seems to be occurring on some non-physical movie screen.” If physics can give us a satisfying explanation for statements like that, it’s safe to say that it can dissolve any mysteries about consciousness.
I’m not trying to explain other people’s reports, I’m trying to explain my own experience.
Same here. Yet what I’ve found is that philosophers often make claims about other people’s experiences, but don’t bother to ask anyone or gather data on what other people report about their experiences. Hence the importance of experimental philosophy.
Thanks for clarifying.
You’ll get no disagreement from me. I’m a proponent of the view that standard accounts of moral realism are typically either unintelligible (non-naturalist accounts usually, or any accounts that maintain that there are irreducibly normative facts, or categorical reasons, or external reasons, etc.), or trivial (naturalist realist accounts that reduce moral facts to descriptive claims that have normative authority).
Surprisingly, the claim that moral realism isn’t coherent is not popular in contemporary metaethics and I almost never see anyone arguing for it, aside from myself, so it’s nice to see someone make a similar claim.
Not really. Moral beliefs evolved as a consensus mechanism to improve fitness—if you didn’t believe your suffering is morally relevant, you were less likely to convince others to help alleviate your suffering, thus reducing your fitness. All of the examples I see given for things that can’t be explained with physical processes but can be explained by moral realism are just bad. I challenge you to give better ones.
I don’t agree. If physicalism is true, this is impossible. If consciousness operates on physical laws, then there is no “psychosocial law” to adjust. It’s physics all the way down. It’s like saying, “imagine a mathematically consistent world where math is the exact same, except 2+2=5”. Not possible.
The talk about access is actually the crux of moral realism. I strongly disagree with the quote. If everything else in moral realism is convincing and sound, but access to moral truth is impossible, then the entire theory collapses. I could nod my head at everything else, but if you can’t explain how access works, I would immediately become very unconvinced. If your moral intuitions, theory, philosophy, etc. have no connection to moral truth, I don’t see any difference between your stance and moral anti-realism. Access is probably the biggest problem in moral realism, and I’ve never heard a satisfactory answer otherwise. I went and read the paper you linked. It doesn’t actually explain how we access moral truths. It literally says
This is a big deal. You can’t just build a framework, notice that it’s missing a massive hole, refuse to explain, and leave. If access can’t be explained, then nothing else is needed to refute moral realism. It’s self-refuting.
Re Bramble, there’s no reason one has to have the moral belief that pain is bad to organize coalitions trying to avoid pain, based merely on it being disliked.
Re physicalism—well, I think the analogy doesn’t go through for a few reasons. First, it’s not even clear what it means to imagine that 2+2=5 without changing the definitions of words. But also, if we just reflect on what consciousness is, it’s very clear that we could imagine every particle moving the same way but there not being consciousness. I agree you have to deny that if you’re a physicalist—that’s one good reason not to be a physicalist.
Re the challenge that no realist has addressed—Enoch addresses it in that article. He also says they haven’t addressed the challenge that way, but the responses they gave still apply—eg the Parfit response to evolutionary debunking.
Well-written, if wrong :P Thanks!
Reasoning like “well, we believe in an external reality because it seems plausible, and objective moral facts seem plausible, so we should believe in those too” is the sort of thing that sounds better the less you know about the epistemology of external reality. It really is a shame that more philosophers don’t know how Solomonoff induction works. No, it doesn’t get you off Neurath’s boat, but it sure as heck doesn’t look like intuitionism.
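Since Solomonoff induction keeps coming up, here is a minimal sketch of the idea it rests on: weight hypotheses by the length of the shortest program producing them, then predict by weighted vote among the hypotheses consistent with the data. This is a toy stand-in I am supplying for illustration, not real Solomonoff induction (which ranges over all Turing machines, not a finite pattern class); the function name and the "repeat the pattern forever" interpreter are my own assumptions.

```python
from itertools import product

def solomonoff_toy_predict(observed: str, max_len: int = 8):
    """Toy Solomonoff-style induction over a tiny hypothesis class.
    A 'program' is a bitstring p, interpreted as 'output p repeated
    forever'; its prior weight is 2**-len(p). We keep the programs
    consistent with the observed prefix and predict the next bit by
    prior-weighted vote, so shorter consistent patterns dominate."""
    n = len(observed)
    weights = {"0": 0.0, "1": 0.0}
    for k in range(1, max_len + 1):
        for bits in product("01", repeat=k):
            p = "".join(bits)
            output = p * (n // k + 2)          # extend past the prefix
            if output.startswith(observed):     # consistent with the data?
                weights[output[n]] += 2.0 ** -k  # shorter = heavier prior
    total = weights["0"] + weights["1"]
    return {b: w / total for b, w in weights.items()}

# After seeing "010101", the short pattern "01" carries most of the
# prior mass, so the prediction for the next bit is overwhelmingly "0".
print(solomonoff_toy_predict("010101"))
```

The point of the sketch is that nothing here looks like consulting an intuition: the prior is fixed by description length, and the posterior is fixed by consistency with observation.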
You really didn’t pass the ideological Turing test in “Classifying anti-realists.” Here, have some reading recommendations: 2-place and 1-place words. Probability is subjectively objective. Math is subjunctively objective. Morality as fixed computation.
Can you cash a belief in moral realism out into disagreements with us about predictions? For example, if we meet aliens, does moral realism make different predictions about their morality than reductionist evolutionary biology? If we build an AI that starts with no moral intuitions, do you expect it to stumble upon the correct moral facts and then accept them, such that if we ran a thousand slightly different copies of the same program, they would all converge?
If realism or quasi-realism work better, for instance in preventing violent disputes about resource allocation, then societies are likely to converge on them. It’s easy to show that realism is desirable, harder to show it is achievable.
Does it work? SIs can only reject non-natural hypotheses if they can test them, and only test them if they can express them in code. Can they? Note that the programmes in an SI can’t even represent continua/uncountability.
In order to carry out Solomonoff induction, we presumably need mathematics. And it’s very tricky to develop a mathematical realism which doesn’t use an epistemology also permitting moral realism. (What counts as intuitionism is very fraught, but on some understandings, mathematical realism is most plausibly reliant on an “intuitionism”.) See Justin Clarke-Doane’s excellent book Morality and Mathematics for a discussion of this.
The typical mathematical realism I’ve encountered involves brazenly misunderstanding model theory. E.g. “Either PA is consistent or it’s not, but math can’t prove it because of Gödel’s theorem, so there are facts of the matter independent of proof, which must therefore be about real stuff.”
We can do math just fine with the much tamer model-theoretic sort of truth (one that says you can have a model where PA is consistent, and a model where it’s inconsistent, and they’re both okay). Being a realist about that sort of truth is relatively unobjectionable, but it probably doesn’t do anything fancy like supporting moral realism.
One can construct a Turing Machine which iterates over all possible PA proofs, halting if it ever finds an inconsistency. Given this, if you’re going to hold that there’s no objective fact of the matter about whether PA is consistent, you’ll also have to hold there’s no objective fact of the matter about whether this Turing Machine halts.
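The search pattern described here can be made concrete with a toy stand-in. A full PA proof enumerator is far too large for a sketch, so this example (an assumption of mine, not from the thread) uses propositional resolution instead: enumerate the consequences of a finite clause set and halt as soon as the empty clause, i.e. a contradiction, is derived. For propositional logic the search also terminates on consistent inputs; for PA the analogous search only halts if an inconsistency exists, which is exactly why “does this machine halt?” encodes “is the theory inconsistent?”.

```python
from itertools import combinations

def resolve(c1, c2):
    """Return all resolvents of two clauses (frozensets of literals).
    Literals are nonzero ints: p and -p are complementary."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def finds_contradiction(axioms):
    """Saturate a finite propositional clause set under resolution.
    Returns True iff the empty clause (a contradiction) is derived.
    The clause set over finitely many literals is finite, so this
    always terminates; the PA analogue would only halt on True."""
    clauses = set(map(frozenset, axioms))
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True       # empty clause: inconsistency found
                if r not in clauses:
                    new.add(r)
        if not new:
            return False              # saturated without contradiction
        clauses |= new

# {p, p -> q, not-q} is inconsistent; {p, p -> q} is not.
print(finds_contradiction([[1], [-1, 2], [-2]]))  # True
print(finds_contradiction([[1], [-1, 2]]))        # False
```

The disagreement in the thread is then about what the PA version of this machine does when implemented: whether its (non-)halting is an objective fact, or depends on the model supplying the “possible proofs”.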
Which proofs are possible depends on your model of PA! In non-standard models, you can have proofs coded for by non-standard numbers.
More LW posts: https://www.lesswrong.com/s/SqFbMbtxGybdS2gRs
When we build a Turing machine that iterates over “all possible” proofs, we have to make choices about physical implementation that are more specific than PA.
When a mathematical theory says that multiple things are consistent, and you try it in the real world and only one thing happens, you should infer that trying it in the real world more precisely corresponds to some other, more specific mathematical structure where only one thing is consistent, not that only one thing was real in the original mathematics, and trying in the real world uncovered which one it was.
So you don’t think there’s a Turing Machine which enumerates all and only valid PA proofs?
For what proof encoded by only a non-standard number would you endorse the claim “this proof doesn’t objectively lack proof-hood”?
I’m saying by asking about the behavior of such a machine implemented in the real world, you are being more specific than PA. For which you should think about the properties of physics and what kinds of mathematics they can implement, not whether proofs in PA have “objective proof-hood.”
Gives me a good idea for a sci-fi story, though:
Suppose rather than building a Turing machine ourselves to check proofs, we explore our infinite universe and find such a Turing machine that appears to have been running for infinite timesteps already. We can tell for sure that it’s checking proofs of PA, but to our shock, it’s actually somewhere in the middle of checking proofs coded by some nonstandard sequence of numbers. We decide to build a space station to keep watch on it, to see if it halts.
So is it impossible for me to abstractly describe a Turing Machine, and then wonder whether it would halt, with that necessarily having a fact of the matter, all without resorting to physical instantiations?
The idea I’m trying to express is that “a proof using PA axioms is a valid proof using PA axioms if and only if it is enumerated by the following TM: [standard PA TM enumerator description]”.
My question is what’s an example of a PA proof you think is arguably valid but wouldn’t be enumerated?
A purely logical TM would be understood to enumerate different proofs depending on the model of the axioms you used to specify it. This is how there can be one model of PA where such a TM halts, and another model of PA where such a TM doesn’t halt. So your plan doesn’t do the work you seem to think it does.
Don’t think of this as “there is one actual thing, but it mysteriously has multiple behaviors.” Even though it’s really convenient to talk that way (I did it above just now), maybe try to think of it like when you pick some axioms, they don’t actually pick out a single thing (if they’re complicated enough), instead they’re like a name shared by multiple “different things” (models), which can behave differently.
Why? Can you endorse mathematical realism, but reject all forms of normative realism, including epistemic and moral realism?
Yes! The claim is that if you use intuitions to justify one but reject intuitions to justify the other, that will be inconsistent.
What’s the inconsistency? You could have an intuition that mathematical realism is true, and that moral realism isn’t.
Then you wouldn’t be rejecting intuitions to justify the other, as in omnizoid’s comment (you’d be using intuitions to reject the other). Also the prior comment uses the phrase “permitting moral realism”—I wouldn’t have taken this to imply REQUIRING moral realism, independent of intuitions.
If the claim is that it would be inconsistent to accept intuitions as a means of justification in general, but then reject them as a means of justification specifically with respect to moral realism, then I agree that would be inconsistent. But someone can endorse mathematical realism and not moral realism simply by finding the former intuitive and not finding the latter intuitive. They could still acknowledge that intuitions could serve as a justification for moral realism if they had the intuition, but just lack the intuition.
Second, note that omnizoid originally said
I don’t see anything tricky about this. One can be a normative antirealist and reject both epistemic and moral realism, because both are forms of normative realism, but not reject mathematical realism, because it isn’t a form of normative realism. In other words, one can consistently reject all forms of normative realism but not reject all forms of descriptive realism without any inconsistency.
Agreed that a difference in intuitions provides a perfectly consistent way to deny one and not the other, I don’t think omnizoid would deny this.
On the second point—presumably there would need to be an account of why normativity should get different treatment epistemologically when compared with mathematics? Otherwise it would seem to be an unmotivated distinction to just hold “the epistemological standards for normativity are simply different from the mathematical standards, just because”. I don’t doubt you have an account of an important distinction, but I just think that account would be doing the work. The initial “tricky” claim would hold up to the extent that identifying a relevant distinction is or isn’t “tricky”.
Right, I think we’re on the same page. I would just add that I happen to not think there’s anything especially tricky about rejecting normative realism in particular. Though I suppose it would depend on what’s meant by “tricky.” There’s construals on which I suppose I would think that. I’d be interested in omnizoid elaborating on that.
Thanks for the reply and kind words!
This was obviously not the extent of my argument for phenomenal conservatism.
What was wrong with the classification of anti-realists? If one is a realist, they think that there are mind-independent moral facts. Thus, to deny this, one needs to think either moral claims aren’t truth-apt, they’re all false, or they depend on attitudes. I’ve read Eliezer’s stuff about morality, FWIW. If you want my ideological Turing test of at least one version of anti-realism, here it is: https://benthams.substack.com/p/sounding-like-an-anti-realist
Yes—though the predictions won’t settle it. Some things we’d predict of aliens is that they’d appreciate pleasure if they can experience it, that some of them would be utilitarians, and we’d also predict greater moral convergence over time. In particular, we’d expect a coherent formula to be able to characterize the moral views that are supported by the most reasons. I think if none of those things ended up being true, my credence in realism would decrease to around 60%.
No they don’t. The standard claim that all antirealist positions are either relativism, error theory, or noncognitivism is false: it requires antirealist positions to include a semantic claim about the meaning of moral claims.
But an antirealist can both deny that there are stance-independent moral facts, and deny the philosophical presuppositions implicit in the claim that there is some kind of correct analysis of moral claims, such that moral claims are either truth apt, all false, or depend on attitudes. Also, an antirealist can endorse indeterminacy about the meaning of moral claims, and maintain that they aren’t determinately truth-apt, false, or dependent on attitudes. For an example, see:
Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical studies, 145(2), 215-234.
I agree with this—one can think some claims aren’t truth apt, others false, others dependent on attitudes. The claim is that collectively these have to cover all moral claims.
I’m explicitly denying that that covers all the possibilities. You can also endorse incoherentism or indeterminacy.
Also, when you say that the claims aren’t truth-apt, are you supposing that the claims themselves have a meaning, or that the person who made the claim means to communicate something with a given moral utterance?
Where would “Morality as fixed computation” fit in your typology? Or metaethical constructivism? Like, it’s fine to dunk on error theorists or relativists all you want, but it’s not real relevant to LW. Individual subjectivism is sort of closer, but I would have liked to see a typology that included things LWers might actually endorse.
As another example of something not fitting in your typology, consider the rules of baseball. We all agree baseball is socially constructed—it’s not trying to conform to some Platonic ideal, the rules could easily have been different, they arose through some social process, etc. And yet facts about baseball are also pretty solid—it’s not a matter of opinion whether it takes three strikes or four to get a batter out.
You might say that baseball is in fact culturally relativist. After all, society came up with the rules of baseball in the first place, and has agreed to change the rules of baseball before.
But suppose the Nazis had won the war, and in this alternate history they forced everyone to play baseball with a large ball filled with air, and there was no pitcher or batters, instead you gained points by getting the ball through a goal guarded by the opposing team, and you weren’t allowed to touch the ball with your arms. It should seem obvious that what is going on is not that the Nazis made it true that “in baseball you kick the ball with your feet.” All they did was outlaw baseball entirely, and force everyone to play
soccer. When the alternate-reality Nazis say “Baseball is played with eleven players on a side,” they’re simply not talking about baseball. So is baseball non-cognitivist, because the Nazis’ statements aren’t actually about the thing they syntactically seem to be about? But again, when you talk about baseball, you’re capable of making perfectly good true or false statements.

Probably some type of relativism.
Failing to address most of the issues. There’s nothing about whether everyone has the same computation, and nothing about how to resolve conflicts if they don’t. There’s also nothing about obligation or punishment...
https://www.lesswrong.com/posts/FnJPa8E9ZG5xiLLp5/morality-as-fixed-computation
I was really hoping for a cogent argument for moral realism, but this is a giant wall of text that is repetitive rather than additive, and consists mostly of the same weak argument made in multiple ways: “This sure feels wrong to me, and that’s probably universal.”
One issue that is downplayed is that it’s not at all clear whether or not the way professional moral philosophers think about these issues reflects how nonphilosophers think about them. The best current studies on how nonphilosophers think about these issues find that when you explain metaethical positions to people, and give them the standard metaethical positions to choose from, about 75% favor antirealist positions. The participants in question were only in the United States and were sampled from populations more likely to be antirealists (including e.g., students), but the high levels of antirealism still raise serious questions about whether moral realists are correct when they presume all or most people find realism intuitive. There is little evidence this is the case, and quite strong evidence that it isn’t.
See, for instance, Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11(1), 53-82.
Yup. And it’s not clear professional philosophers are actually seeking disconfirmation of their theories. From outside, it seems a lot like they want to be mathematicians (seeking consistency and soundness, though in a domain that’s less concrete), rather than scientists or engineers (seeking reality or usage).
Very amusingly, philosophers who believe in moral realism should not CARE what most people think, right? It’s possible for every human and animal who’s ever lived to be factually incorrect about moral truths, right?
I certainly don’t think they’re seeking disconfirmation of their theories. Quite the contrary, much of analytic philosophy seems dedicated to starting with one’s conclusions, then coming up with justifications for why they are correct. That seems to be built into the very method. Have you read Bishop and Trout’s paper that makes this point?
Here it is: Bishop, M., & Trout, J. D. (2005). The pathologies of standard analytic epistemology. Nous, 39(4), 696-714.
And here’s a quote:
I haven’t read your post due to its extreme length, but to say something in response to your opening – I think much content on LW addresses the question of confidence contra putative experts on a field and high confidence often seems warranted. The most notable recent case is LW being ahead of the curve on Covid, but also see our latest curated post.
Could you link me to some of those posts? I wouldn’t agree with the heuristic ‘never disagree with experts’, but I’d generally—particularly in an area like philosophy—be wary of being super confident in a view that’s extremely controversial among the people who have most seriously studied it.
Sorry, short on time, can’t dig up links. Take a look at Inadequate Equilibria.
I think in philosophy it might be less the case than in any empirical field. Experts in biology have perhaps run experiments and seen the results, etc., whereas philosophy is arguments on the page that could easily be very detached from reality.
And “more time spent” has some value, but not that much. There are people who’ve spent 10x more time driving a car than me, but are much worse because they weren’t practicing and training the way I was. And more relevantly, you might say “yes, they’ve spent more time but they’re saying X, and X is clearly wrong, so I don’t trust them.”
For philosophy, I think a major reason I distrust most philosophers is they’ve only been thinking about philosophy, whereas I think you’re a much better thinker when you’ve engaged with more of the world and more domains, e.g. your philosophical thinking gets better having studied physics and maths and biology and neuroscience, and most philosophers simply haven’t.
Well, Chalmers has studied maths. The fact that someone is currently employed as a philosopher doesn’t tell you much about their background, or side interests.
Trust, of course, is irrelevant. You should consider the arguments.
That would include the many untestable philosophical claims in the Sequences, of course.
Currently this comment has −2 agreement karma—why do people disagree with this idea?
The people who’ve most seriously studied philosophy of religion tend to be theists (69.5%), which is larger than the proportion of philosophers specializing in metaethics that endorse moral realism (65.4%). Do you think this is good evidence that theism is true? I don’t.
I think it’s the best argument for theism, though I would basically Moorean shift it because theism is so crazy. Also, there’s huge selection effects—studying POR makes people less religious.
Why do you think it’s the best argument for theism?
...Right, and what if selection effects are causing people more disposed to endorse moral realism to become academic philosophers? If that’s the case, the 62% moral realism among philosophers may also reflect selection effects, rather than philosophers being persuaded by the quality of the arguments.
The arguments about consciousness not being physical seem circular. If consciousness and experiences are physical, then you can’t make an exact copy of a brain without it experiencing consciousness, and you can in principle transfer experiences between brains (worst case, using nanotech).
It’s true that a physicalist would say that zombies are impossible. But that’s the point of the argument! It’s showing how one of the implications of physicalism is false—zombies are possible.
You might have independent reasons for thinking that zombies are impossible. But the mere fact that the argument is premised on the possibility of zombies doesn’t make it circular.
But where did you prove that zombies are possible? The only evidence you provide is that you can imagine them existing in a non-physicalist world-view.
You might not think that zombies are possible. But then that would be the problem with the argument. It’s not that the argument is circular. It’s that one of the premises is false (or unjustified).
Here how the argument works in a nutshell (or rather, doesn’t):
I can imagine that physicalism is wrong without noticing any contradiction → Physicalism is wrong
This doesn’t work, unless there actually is no contradiction. So we have to either implicitly assume that our inability to notice a contradiction is in general a true signal of there being no contradiction (which is false), or smuggle in the assumption that in this specific case, our intuition is correct. But this would be begging the whole question: we have to assume that physicalism is wrong in order to conclude that it’s wrong. Thus it’s circular reasoning.
Here’s a kind of parody that one might run:
I reject that either physicalist or non-physicalist is necessarily making a circular argument. They just have different intuitions about whether zombies are possible. You might in fact think that zombies are impossible but then that’s the reason to reject the argument, not that it’s circular.
I completely agree!
If there were an anti-zombie argument that claimed to prove physicalism true, the same way the zombie argument claims to prove physicalism false, such an argument would be circular! But the difference is that physicalists make no such claim, while non-physicalists indeed do.
As long as you are non-physicalist who simply believes that zombies are possible, you are not making a circular argument. You are just being self-consistent, or even tautological, because “zombies are possible” is exactly the same statement as “physicalism is wrong”. But as soon as you claim that the fact that you believe zombies to be possible proves physicalism wrong—then you are making a circular argument.
Here’s another parody argument:
Is this argument circular? I assume not. But it seems to have the same structure as the zombie argument.
There is an ambiguity here, depending on what exactly is meant by “it seems”.
If we are talking about seeing some evidence of birds existing, then the argument is not circular, it is pointing to this evidence in the reality, which may or may not be enough to conclude that non-mammals exist. But neither this argument truly has the same structure as zombie argument.
If we are talking about being able to imagine that birds are possible, without any evidence, and thus concluding that birds are possible, then it would be structured as a zombie argument and be circular as you would have to smuggle in the assumption that your imagination correspond to reality in this specific case, namely that birds are indeed possible.
Let’s suppose that the zombie argument smuggles in the assumption that what you’re imagining is evidence of reality. Then the argument would look like this:
I can imagine zombies as possible.
My imagination is evidence of reality.
So I have evidence that zombies are possible.
The possibility of zombies is inconsistent with physicalism.
Therefore, I have evidence physicalism is false.
This still isn’t a circular argument. It’s just an argument with a false premise, namely premise 2.
More generally, if you think an argument lacks support, that doesn’t mean it’s circular.
Yes, you can remake the zombie argument so that it will not be circular and will just be wrong or very weak. This isn’t the zombie argument in question, though.
How would you interpret the zombie argument so that it’s circular? Can you lay it out explicitly like above?
Intuition versus intuition isn’t much better than circular versus circular.
If a zombie has an additional physical law it’s not physically identical. More generally, what do you even mean for something to be not physical and have a causal effect on the physical world? If there is a causal effect, you can have equations about it.
You either define knowledge such that it can communicate what it’s like to see red, or it also can’t communicate how to ride a bicycle. Either way it’s just confusion between knowing about a state and being in a state, not an argument against the physicality of bicycles or consciousness.
Only if it is susceptible to mathematical description.
Physicalism doesn’t imply that you get extra knowledge by personally instantiating something.
What isn’t?
Then either “what it’s like to see red” is not knowledge, like how to ride a bicycle, or this kind of physicalism can’t explain bicycles and you should use a better one.
Qualia.
It isn’t. There is no reason it should be know-how.
I think this post could be improved by including some quotations at the beginning that are representative of the dogmas of LessWrong. The Eliezer quotes are good, but I can’t recall any explicit posts about moral anti-realism on LessWrong.
1. Putting stock in philosophers
My general impression is that you put far too much stock in what a majority of philosophers think. While lots of people thinking something is some evidence that it’s true, and lots of “experts” thinking something is even better evidence, I have yet to hear a compelling account of why I should think philosophers are experts at reaching correct philosophical conclusions in a reliable and consistent way, across different issues.
And, in any case, there are a variety of reasons why we should seriously doubt that what amounts to barely over a 2:1 ratio of realists to antirealists is anything more than paltry evidence for realism. What matters is why philosophers endorse realism. Do they have good arguments? Does studying philosophy cause people to be realists? If it does, why does it do so? We don’t know enough about the base rate of endorsement of realism among the general population, we don’t know enough about whether self-selection effects cause people more disposed to endorse realism to become philosophers, we don’t know why they endorse realism, and so on.
Furthermore, if you ask around, you’ll find philosophers commenting on how the rise of realism is fairly recent, and if you go back a few decades, most philosophers seemed to be moral antirealists (sadly, we don’t have PhilPapers surveys from the 20th century). If we go back even further, we might find most were moral realists. Realism has waxed and waned in popularity among philosophers. It’s unclear whether its popularity is due to good arguments or due to fashionable trends in the field.
I’d be curious to hear how much stock you think we should put in philosophers on these matters, and why. What kind of expertise do philosophers have? Why should we think that they are generally better at converging on correct conclusions about realism and consciousness than people on LessWrong?
2. Updating doesn’t change much in this case
One can grant that most philosophers endorse a position contrary to their positions, consider that some evidence for their views, and yet still be unconvinced. How strong of evidence do you take it to be that e.g., 62% of philosophers endorse moral realism? How much should that increase my confidence that moral realism is true? And why?
You endorse utilitarianism, even though most philosophers reject it, and possibly by a larger margin than they reject moral realism. Only 30.6% endorsed or leaned towards consequentialism, and only a subset of these would endorse utilitarianism. It’s not likely that the total number of philosophers who endorse utilitarianism is as high as the amount who endorse moral antirealism (~26%), since this would require almost all of those who endorse consequentialism to be utilitarians as well.
Presumably you take the majority rejection of utilitarianism as some evidence against it, but not enough to overturn your confidence in utilitarianism. Perhaps the same is true of people that lean towards antirealism and physicalist views of consciousness. It’s hard to know.
3. Dogmas
I also want to briefly flag that your title, “Two Dogmas of LessWrong,” seems to suggest that rejecting moral realism and antiphysicalist views of consciousness are “dogmas.” I’m not sure that they are. Yet you may want to consider that the prominence of both views among philosophers may likewise be dogmas or, more plausibly, there may be more foundational dogmas common among contemporary analytic philosophers that cause higher rates of realism and antiphysicalist views of consciousness than in the absence of those dogmas. I’m not idly speculating: I think this is in fact the case. I’m not the first to suggest this, and I won’t be the last.
There have been a variety of traditions and thinkers that have raised concerns about analytic philosophy’s methods, and its strange approach to language, concepts, metaphysics, and epistemology, from the pragmatists in the form of James and the more caustic FCS Schiller, to the positivists, through Wittgenstein, ordinary language philosophers, and more recently experimental philosophers and others who have questioned the ubiquity of and reliance on intuitions among philosophers.
Whatever their flaws, in these various ways these thinkers and approaches have raised what I take to be very serious challenges to mainstream philosophical methods, challenges that I saw echoed in the critical stance many people associated with LessWrong took towards much of contemporary philosophy. I suspect the root of the problem isn’t realism and antiphysicalist views about consciousness, but analytic philosophy’s methods. If your methods aren’t any good, you’re going to end up with lots of people converging on bad ideas. Garbage in, garbage out. I should note that these are preliminary remarks, and I’m not attempting to make a more comprehensive case against the methods of contemporary philosophy. While that’s something I have done in passing in other comments, I’m more interested in a positive case for why we should put stock in contemporary analytic philosophy in the first place. It doesn’t strike me as having an especially good track record at solving problems.
I think this for pretty basic Aumann’s agreement theorem reasons. While their methodology may be super wrong, so may be the methodology of LessWrongers.
I don’t know exactly, but I’d give it at least 10% odds even if it remained implausible sounding. I endorse utilitarianism, but peer disagreement dramatically reduces my confidence in it.
This was just a reference to Two Dogmas of Empiricism—one clearly need not be dogmatic to be an anti-realist—though I think there are lots of dogmatic anti-realists; as is no doubt also true of realism.
I understand if it’s a reference to Quine, but a title like that is still provocative and carries rhetorical weight. I see little reason in giving the impression that people are being “dogmatic” about something, and even less if you don’t actually think that. I’m also not sure how many readers are going to pick up on the reference, either (it wouldn’t surprise me if they did, I’m not sure one way or the other).
As one data point, I saw immediately what omnizoid was referencing. (But I don’t think omnizoid makes as good a case as Quine does.)
Fair enough. I still find it somewhat unappealing to use a title that implies people are being dogmatic without providing much in the way of support for the implication. I’d prefer titles be accurate rather than clever.
I agree: accurate is better than clever. (And, for the avoidance of doubt, I wasn’t meaning to argue that omnizoid’s choice of title is a good one.)
I’m not sure whether I think it’s fair to call the two things omnizoid is complaining about “dogmas of LW”. Physicalism about consciousness is certainly pretty widely and confidently accepted around here. Moral nonrealism I’m not so sure about. It doesn’t seem entirely unreasonable to suggest that these things are viewed on LW in something like the way the analytic/synthetic distinction and reductionism were viewed among empiricist philosophers when Quine wrote “Two Dogmas”.
Quine’s paper is much more interesting than omnizoid’s because (1) he makes better arguments and (2) he is arguing for a thesis more like “this stuff is subtler than everyone thinks” than like “you guys are straightforwardly wrong and one of the standard alternatives to your view is correct instead” and actually bringing some new ideas to the table, which I don’t really think omnizoid is doing.
That’s fair. I can grant that. Like you, I’m less sure about the general attitude towards moral realism here. I’d have thought inclinations were more towards dissolve-the-dispute than a decidedly antirealist stance. I’d be interested in finding out more about people’s metaethical views on LW.
The salient point for LW is orthogonality thesis, not (alternatives to) moral realism. It’s not really a philosophical point, as it’s clearly possible in principle to build AIs that pursue arbitrary objectives (and don’t care about their moral status). A question closer to practical relevance is about the character of goals of the more likely first AGIs, both for initial goals and what they settle on eventually.
I agree with the orthogonality thesis, so no point disagreeing there. I’m not explaining the most widely held lesswrong beliefs—just a few that I strongly disagree with.
One issue with the post is that you didn’t convincingly point to what specifically you disagree with, as something meaningfully present on LW and not only independently described or gestured at in your post. You are making claims about what LW views are, but the claims are too far from being either clear or self-evident (in actually referring to something that’s really from LW) to stand on their own, without enough references/quotes to clarify what’s going on. (It’s an unnecessary issue, you could just describe your points, without framing them as a disagreement. Though to have a chance of meaningful engagement an LW post should be shorter and focused on fewer points.)
So I pointed to a real LW view that seems closest to what you are talking about, even though it’s clearly irrelevant to your post and isn’t what you discuss. I think LW views relevant to your post (those held by multiple people as common knowledge openly communicated here in particular) don’t say anything too surprising or specific, and are additionally confused on proper use of philosophical terms.
I didn’t want the post to be too long. I agree that not everyone on LessWrong agrees with this and exactly how prolific they are is an empirical matter that I have not investigated. However, my sense, having spent a lot of time around such people, is that they’re pretty common.
If it turns out that LessWrong is not anti-realist, the post could have been half the length.
The most popular metaethics on LessWrong appears to be utilitarianism... but it’s unclear whether or not utilitarianism is a form of realism.
I think the crux is more about naturalism. Full-strength moral realism, such as Platonism, is often explicitly anti-naturalist.
Utilitarianism is a normative ethical view, not a meta-ethical view. I’m a utilitarian and a realist. One can be a utilitarian and adopt any meta-ethical view.
Of course not. It’s a form of consequentialism, so it’s metaethics. But it’s incomplete metaethics... it doesn’t specify realism versus anti-realism, but it does specify other things.
Can you elaborate? Why is it a metaethical position because it’s a form of consequentialism?
Consequentialism, deontology, etc. are broad claims about ethics that aren’t object-level ethics, like “thou shalt not kill”.
That’s true, but consequentialism, deontology, etc. are typically categorized as normative ethical theories, while claims like “don’t kill” are treated as first-order normative moral claims.
The term “metaethics” is typically used to refer to abstract issues about the nature of morality, e.g., whether there are moral facts. It is pretty much standard in contemporary moral philosophy to refer to consequentialism as a normative moral theory, not a metaethical one.
I don’t think there are correct or incorrect definitions, but describing consequentialism as a metaethical view is at least unconventional from the standpoint of how these terms are used in contemporary moral philosophy.
As omnizoid points out, utilitarianism is not a metaethical position. It is not a form of realism.
Eh. Constructivism, definitely. One should go over to EA forum if you want to find all the utilitarians :P
If the moral facts are causally inert, then your belief in the existence of moral facts can’t be caused by the moral facts!
If the Mathematical facts are causally inert....
Mathematical facts are facts about well-defined what-if scenarios. We evolved to be able to consider such scenarios because they often bear a resemblance to what happens to us. So there is an explanation for how our beliefs about mathematics could become correlated with mathematical truth, even though this explanation is not causal. However, it is not entirely obvious how to tell a similar story about moral truths—why did we evolve to be able to perceive moral facts, if indeed we did?
I’m not saying that we perceive mathematical facts. Rather that if there is a non-perceptual, and therefore non-causal, epistemology for mathematics, there could be one for other things.
Sure, “could be”.
They are not! If two plus two equals five, two apples and two more apples would add up to five apples.
But they aren’t causally inert, they’re part of causality! Our universe runs on mathematical laws, and mathematics is mostly just a description (or extrapolation) of them. If there were weird carve-out exceptions for 2+2=5 in our physics, they would very much be incorporated into our mathematics. If we lived in a universe where physics operated by different mathematical laws, then our conception of mathematics would be correspondingly different.
I think this refutes Platonism, but I’m not sure.
You seem to be claiming that it is possible for mathematical truths such as 2+2=5 to be other than what they are; I can agree with this on an epistemological level (since we don’t know all mathematical truths) but on an ontological level, no: mathematical truths are necessary truths. This is the conventional view, though I’m not really sure how to argue it to a skeptic: but if you don’t see why 2+2=4 is a necessary truth then I claim you don’t truly comprehend why 2+2=4.
Reason can still allow us to discover the moral facts, even if the moral facts don’t cause anything. If you have 13 cakes, you can’t divide them into two equal groups. The number 2 doesn’t cause this, but it explains that feature of reality. See also the Enoch paper that I reference for more on this.
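The arithmetic point above can be sketched in a few lines. This is a hypothetical illustration (the function name is mine, not from the thread): the impossibility of an even split is explained by 13 being odd, with no causal story about numbers needed.

```python
def splits_into_equal_halves(n: int) -> bool:
    """Return True if n items can be divided into two equal whole groups."""
    return n % 2 == 0

# 13 cakes: no even split exists, because 13 is odd.
print(splits_into_equal_halves(13))  # False
# 12 cakes: an even split exists (6 and 6).
print(splits_into_equal_halves(12))  # True
```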
We can communicate the meaning of mathematical facts in ways you can’t communicate the meaning of irreducibly normative moral facts. The former are intelligible, the latter aren’t. So it’s not clear you can even present us with an intelligible set of propositions in the form of putative “moral facts” for us to entertain whether or not reason could allow us to discover them. “Discover what?” We can ask, and you won’t be able to intelligibly communicate what it is we’re supposedly discovering. The kind of moral realism you endorse isn’t merely false, it’s not even intelligible.
You and I have talked about this a lot before, so no need to rehash it.
Random remarks about consciousness:
I can’t see the difference. What exactly “metaphysically possible” means?
If in other world something else causes all the things that consciousness causes in our world, all your thoughts about consciousness provide no evidence that we aren’t in fact in that other world.
The linked article uses a totally different meaning of “dualism”. EM fields are 100% physical.
Linked post uses non-physicalism in the proof of non-physicalism (point 2).
Metaphysical possibility denotes whether something could actually occur. It’s a bit broader than logical possibility. The distinctions are a bit tricky and I’d recommend googling it if you’re interested to hear more—there are lots of commentators, so I’m leaving my comments brief.
My thoughts about consciousness—in the physical sense—don’t provide evidence for it, but consciousness which I directly access does.
EM fields are physical, but the claim is that the psychophysical laws would be empirically investigatable and may govern EM fields on dualism.
Of course, physical identity is taken to include laws as well as material structure, and p-zombie proponents consider them possible, in some appropriate sense, in our world with our laws.
When I reflect on my pain, I notice that it’s the kind of thing I want to avoid. Other people’s pain doesn’t exist phenomenally for me... I don’t feel it. An ethical theory should modify my behaviour with regard to others somehow, so I don’t think you get one out of subjective feelings alone.
Same here. When I experience pain, I only notice that it’s something I want to avoid. I don’t think of it as “bad” in some stance-independent way. I don’t want other people to be in pain, either, but that isn’t part of my phenomenology. It’s a desire, or attitude that I have, and it has nothing to do with moral realism.
Note something else strange about the remark. It says “when we reflect on pain we conclude that...”
It’s strange to make a claim about what other people conclude. Who is “we” here? It’s not me, nor does it appear to be you. Yet for some reason we’re supposed to take the author’s phenomenology as evidence in favor of realism, yet phenomenology that doesn’t lend itself to realism (or even lends itself to antirealism) seems to be ignored.
“Bad” has a bunch of meanings, many of which are not morally relevant. A bad apple, or a bad movie are not moral wrongs.
There are a whole bunch of reasons to think that this isn’t a moral wrong:
It’s not an intentional act.
It’s not breaking any rules.
It’s nature’s way: the dino that gets its throat ripped out was having a bad day, but the other one is getting to feed.
But it’s a moral bad. A bad apple or bad movie is ineffective at being a movie or apple, but that’s not a moral notion.
Why is it a moral bad?
What’s a moral bad, as opposed to a nonmoral bad?
Stuff that’s intentional, justifies rewards and punishments, etc.
If someone treads on my toe deliberately, that pain is a moral bad. If someone treads on my toe accidentally, that pain isn’t. But there’s no difference in the quale.
This is too long to provide a detailed response. I am not sure you interpret “moral realism” the same way. To me, it means something like this:
Imagine universe without any humans (or any other sentient beings). From my perspective, talking about “morality” in such universe simply does not make sense, this word does not apply to anything that exists there.
(As a hypothetical alternative, if morality was somehow encoded in the laws of physics or something like that, things could possibly be moral or immoral even in a completely dead universe, like maybe a penis-shaped meteorite would be immoral, and a set of craters that coincidentally spell the name of God would be moral.)
The definition of “morality” is suspiciously aligned with some things that humans want. Humans avoid pain; causing pain is immoral. Humans cooperate in large groups; cooperation is moral, betrayal is immoral. Etc. This suggests that if another species evolved with different needs, its definition of morality would be somewhat different. Not completely arbitrary, because you have convergent instrumental goals (every evolved species would probably prefer survival over death, etc.). But an asexually reproducing species might have different intuitions about sexuality; a species that can upload their memories might have different intuitions about physical death; a hive mind might have different intuitions about individualism and privacy; etc.
As a more crazy thought experiment, hypothetical beings living in an RPG game where any damage gives you experience points might have intuitions like “pain is good”, and their ideas of torture might include locking people in safe rooms with soft walls where they are unable to hurt themselves, and therefore never gain XP and never level up.
So I use “no moral realism” to mean that morality is somewhat species-dependent.
Depends on who does the talking. Why would the presence of something in the universe influence the methodology of judging it (“can’t judge it”), rather than the result of a judgment (“it’s worthless”/“it has no moral relevance”)? (Sounds like corrigibility, a morality that is not in closed form and depends on the environment.)
The absence of something can easily preclude the possibility of judging it.
Right. So for that to make sense, the things being judged are not the universe as a whole (or itself), but some sort of parts/aspects abstracted from it, objects of a different kind that are only relevant by somehow relating to it, perhaps “embedded” in it.
This is harder to set up as a guide to decision making, because consequences of actions or decisions are not as isolated from the rest of the universe, but I guess scoped consequentialism (goodhart/boundaries, mild optimization) would want to make some sense of this. Also, updateless decisions isolate abstractions of ignorance about current/future observations.
I agree that if there were no conscious beings there would be no morality, because I think the only good things are pleasurable brain states. I think humans often want things because they’re good. I replied in more detail to the evolutionary debunking argument in the article.
I notice that terms such as “real” and “objective” and their opposites are pretty bad at capturing the nuances of philosophical positions. It’s one of the problems with conventional philosophy: the lexicon is flawed, and thus there are these endless arguments about definitions.
The classical LW framework of the map–territory distinction is more helpful here. Some elements of the map can be wrong—have no referent in the territory and serve no utility. Some can directly (1 to 1) reference elements of the territory. Some can reference elements of more detailed maps, be useful and make sense in the context of a map, and have some kind of referent in the territory in principle, but in a convoluted way.
This framework isn’t perfect. But it’s better. Less wrong, if you will. And this is my general experience as a person who has been engaged with philosophy since my early teens. LW philosophy just seems to be generally better at actually resolving confusion.
Anyway. Here are a couple of mistakes that you make.
We can’t accept that morality is real in the same way we assume that reality is real. With reality, we have an optimisation process ensuring that our senses correspond to the outer world. With morality—not so much. There is no causal history to explain why our ethical feelings would correspond to some external moral truths of the universe.
Also, not sure whether you are already biting this bullet or not, but you have the same reasons to assume that aesthetics is real as with ethics.
As for the zombie argument and qualia inversion, it was already mentioned in the comments that they beg the question. Your attempt to make a stronger non-epiphenomenal version of zombism also fails. As soon as you add a new physical law that makes zombies behave as if they are conscious without actually being conscious in a zombie world, you have destroyed the symmetry between the two worlds. Now we can’t say that they are physically the same. Btw, Eliezer mentioned this case, calling it the zombie master.
(Disclaimer: didn’t read the post, it is too long and I doubted it would engage with my views.)
I’m not sure how popular moral anti-realism actually is here. For example, Eliezer’s position was technically moral realist, though his metaethics was kind of strange.
I’m not sure whether to classify myself as a moral realist or anti-realist. Regarding your litmus test “it’s wrong to torture babies for fun” I find myself saying that it’s true in a sense, but in a different sense than we normally use the word “true”. How important this difference is depends on whether we’re talking about object-level ethics (in which case you can pretty much ignore the difference) or metaethics (in which case I think the difference is pretty important). And, when asked to describe the difference, I would say that by calling a moral proposition true what we are primarily doing is advocating or condemning certain acts/people, rather than trying to create a correspondence between listeners’ beliefs and reality. (You can say we are technically doing the latter as well since “reality” can be taken to refer to moral facts as well as non-moral ones, but I think this is missing the point.) So am I a moral realist or an anti-realist?
That sounds like anti-realism—probably some type of quasi-realism.
Can you elaborate on Eliezer being a moral realist? Is there a summary anywhere or could you provide one?
Regarding this statement: “it’s wrong to torture babies for fun,” this is a normative moral claim, not a metaethical one. A moral antirealist can agree with this (I’m an antirealist, and I agree with it). Nothing about agreeing or disagreeing with that claim entails realism.
Your position sounds like antirealism to me, but I’m not sure if it would fit with any of the standard categories. A lot hinges on your statement that:
If you were claiming that moral claims, despite appearing to be saying things that were true or false, were actually, instead, used to condemn acts/people, that would sound like some type of expressivism/noncognitivism, but since you’re also trying to maintain use of the term “true,” I’m not sure what to make of it. Omnizoid’s suggestion of quasi-realism makes some sense since part of the goal is to maintain the ability to say that one’s moral claims are true while still treating them as largely serving an expressive role; those accounts hinge on deflationary views of truth though and it doesn’t sound exactly like you’re endorsing that.
I think the central question would be: Do you think that there are facts about what people morally should or shouldn’t do, or what’s morally good or bad, that are true independent of people’s goals, standards, or values? If yes, that’s moral realism. If not, that’s moral antirealism.
https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right
In 2008, which is a very long time ago, Eliezer wrote, hugely paraphrased:
-”I think the central question would be: Do you think that there are facts about what people morally should or shouldn’t do, or what’s morally good or bad, that are true independent of people’s goals, standards, or values? If yes, that’s moral realism. If not, that’s moral antirealism.”
I certainly don’t believe that the truth of moral facts is dependent on people’s goals, standards, or values; the qualifier I would give is that our beliefs about moral facts are the same thing (tautologically) as our moral standards. So I guess I am a moral realist? Or maybe you are right that my position doesn’t fit into any of the standard categories. I guess it doesn’t matter, I was just curious...
-”Regarding this statement: “it’s wrong to torture babies for fun,” this is a normative moral claim, not a metaethical one. A moral antirealist can agree with this (I’m an antirealist, and I agree with it). Nothing about agreeing or disagreeing with that claim entails realism.”
Right, the litmus test was whether the statement is “true”. Sorry about being unclear.
I’m not sure if you’re a moral realist. What do you mean when you say this?
A moral realist may think that there are, e.g., facts about what you should or shouldn’t do that you are obligated to comply with independent of whether doing so would be consistent with your goals, standards, or values. So, for instance, they would hold that you “shouldn’t torture babies for fun,” regardless of whether doing so is consistent with your values. In doing so, they aren’t appealing to their own values, or anyone else’s values, but to facts about what’s morally right or wrong that are true without reference to, and in a way that doesn’t depend on, any particular evaluative standpoint.
-”In doing so, they aren’t appealing to their own values, or anyone else’s values, but to facts about what’s morally right or wrong that are true without reference to, and in a way that doesn’t depend on, any particular evaluative standpoint.”
OK, so now it sounds like I am not a moral realist! I definitely think that by making a moral claim you are appealing to other people’s values, since other people’s values is the only thing that could possibly cause them to accept your moral claim. However, the moral claim is still of the form “X is true regardless of whether it is consistent with anyone’s values”.
A moral realist would think that there are facts about what is morally right or wrong that are true regardless of what anyone thinks about them. One way to put this is that they aren’t made true by our desires, goals, standards, values, beliefs, and so on. Rather, they are true in a way more like how claims about e.g., the mass of an object are true. Facts about the mass of an object aren’t made true by our believing them or preferring them to be the case.
-”One way to put this is that they aren’t made true by our desires, goals, standards, values, beliefs, and so on.”
OK, I am a moral realist under this formulation.
-”Rather, they are true in a way more like how claims about e.g., the mass of an object are true.”
I guess it depends on what you mean by “in a way more like”. Moral claims are pretty fundamentally different from physical claims; I don’t see how to get around that. One way to put it would be that the notions of right and wrong are not inductive generalizations over observed phenomena. Another way to put it would be that the question “what does it mean for something to be right or wrong” is meaningless; the only meaningful question is “what do we mean when we say that something is right or wrong” (to which the answer is “we do not mean anything; rather, we speak to advocate or condemn something”). But if you are just referring to some surface-level similarity like “neither of them is actually a secret way of referring to the speaker’s beliefs/opinions/values”, then sure.
Moral realists are going to differ with respect to what they think the metaphysical status of the moral facts are. Moral naturalists may see them roughly as a kind of natural fact, so moral facts might be facts about e.g., increases in wellbeing, while non-naturalists would maintain that moral facts aren’t reducible to natural facts.
“…is wrong” rather than “is disapproved of by me” hints at realism... but a lot of people are vague, and some people equivocate deliberately.
I don’t think it hints at realism. Why do you think it does?
The wrongness is phrased as a one place predicate.
Why does that indicate realism? If someone says “Pizza is tasty” does that hint at gastronomic realism?
Yes. More so than “Pizza tastes good to me”.
Relativists can say
“it is wrong to torture babies for fun”
in the sense that they can slightly misrepresent their own views. Since they don’t believe in stance-independent moral facts, an accurate rendition of their views would include the stance as one of the arguments to a two-place predicate.
“According to my stance, it is wrong to torture babies for fun”.
But that statement gives no reason for anyone to change their mind.
Okay. Thanks. With respect, I disagree. I do not think that claims like “murder is wrong” or “pizza is tasty” in any way imply or even hint at normative realism about the claims in question. Both claims are completely consistent with antirealism, and it’s not at all clear to me how either would indicate some form of normative realism.
I am not sure the reason you gave, that it’s phrased as a one place predicate, is any kind of substantive indication of realism. I can grant that:
“Murder is wrong” is more consistent with, and more likely to be an expression of, a realist stance than “I disapprove of murder.” But whether a remark is more consistent with what a realist would say, relative to some other remark less likely to express realism, doesn’t indicate in absolute terms that it meaningfully hints at realism. However, nothing bars a normative realist from expressing subjective attitudes, and nothing bars an antirealist from employing conventional assertoric language to express subjective (or more generally nonrealist) evaluative standards or normative judgments.
For one thing, expressions of our preferences often exclude any explicit qualification that they are our preferences because in many contexts it would violate Gricean maxims to explicitly indicate that something is a preference, or an expression of our subjective attitudes. To the extent that most people aren’t gastronomic realists, a statement like “chocolate ice cream is delicious” doesn’t need “…in my opinion” at the end, or “I consider” at the beginning, because this is implicit. People may include such qualifications explicitly, but typically only in contexts in which e.g., some contextual goal is relevant, such as not offending someone with a contrary opinion, or to emphasize that you are stating a contrary opinion.
They are compatible with anti realism in just the sense that they are lossy, inaccurate renditions of it.
That would be true of casual conversation, but not philosophical debate.
Why do you say they’re lossy or inaccurate renditions of it? My position on this is that statements like “murder is wrong” are simply normative claims, and they are in no way more indicative of realism or antirealism. I’m still not understanding why you think they’d indicate realism. Why presuppose that such statements have anything to do with expressing metanormative standards at all? They’re normative claims, not metaethical ones, and it’s not clear to me why we’d imagine a normative claim (i.e., a claim about something being right, wrong, permissible, impermissible, and so on) suggests any particular metanormative stance, unless such a stance were:
(a) explicitly accompanying the remark, e.g., “murder is objectively wrong”
(b) we had background knowledge about the speaker in question that would suggest they’re using it that way, e.g., a moral realist says “murder is wrong”
or
(c) we had background knowledge about the degree to which such language was typically used to convey claims with particular metanormative presuppositions, e.g., we ran a bunch of surveys and discovered most people from the population the person is from are committed to moral realism
Without such information, I see no particular reason to presume such remarks hint at realism merely by examining the structure of the sentence.
I disagree. Such norms apply to philosophical conversations as well. For what it’s worth, I’m a moral antirealist and I use normative language all the time. I don’t think moral realists have any kind of monopoly on, or priority over, straightforward normative claims in any domain, because I don’t think normative claims hint at any particular metanormative standards.
Because they don’t make stance dependence explicit.
My claim is that normative claims have subtypes. “Subjectively wrong” doesn’t mean what “objectively wrong” means. In a way that’s your position, too, since you think subjective wrongness exists and objective wrongness doesn’t. You’re not making a noncommittal statement because you think there is nothing to choose between objectivity and subjectivity.
Metaethics, normative ethics, and object-level ethics are different, but not entirely separate magisteria. For instance, utilitarianism implies that murder is sometimes justified. Likewise, realist metaethics has implications for normative ethics, not so much in terms of what is wrong, but in terms of how wrong it is.
If something is merely against your preferences, why should… actually, objectively… someone go to jail for it?
Which gets us back to the issue of slightly misrepresenting relativist views... leaving out the stance dependence makes the problem slightly harder to spot.
I don’t think the claims are stance independent, either, so I don’t think there’s any loss. In other words, I don’t think typical moral claims imply stance-independence or stance-dependence. They don’t imply or hint at any particular metaethical position at all. Why suppose that they do?
We don’t take causal claims like “It’s going to rain tomorrow” to imply a position on how to interpret quantum mechanics. Likewise, it may be that everyday moral claims are simply indeterminate with respect to metaethical presuppositions.
I agree. But I don’t think these categories and distinctions regularly figure into everyday normative and evaluative claims. They’re philosophical inventions, and have little to do with what ordinary moral and normative discourse is about. At any rate, to the extent that some form of these notions does manifest, I don’t think we can readily read it off of the superficial appearance of seemingly fact-stating claims just by examining the structure of toy moral sentences in the abstract. If we want to know what people are doing when they make moral claims, we should be doing empirical work that involves examining actual instances of usage, not hypothetical ones.
Depending on precisely what is meant by subjective wrongness, I don’t even believe some forms of that exist, either.
Sorry, not sure what you mean. Can you clarify or restate? The way I use moral and normative language is idiosyncratic and certainly doesn’t reflect ordinary usage. I’m discussing how other people use these terms, not how I use them. If someone wants to know how I use normative language I can just tell them. No need to speculate.
What do you mean when you say that metaethics has implications for how wrong something is?
I’m not sure if what you’re referring to is my criticism of Carroll’s remark, but my criticism is that he characterizes relativism in terms of agent rather than appraiser relativism.
If someone asks the awkward question “how do you know”, you need to drill down to something, if not all the way to QM, and not just repeat the claim.
Yes, and they are useful inventions, because they provide a justification for doing things based on ethics, such as putting people in jail. If someone asks the awkward question “why should people go to jail for that”, you can’t answer it just by saying it’s against your preferences.
The realist case against relativism consists of a positive claim, that realism works, and a negative claim, that relativism doesn’t. If the positive claim fails, that doesn’t mean by itself that the negative claim fails. The relativist still needs to show that relativism can do the required real-world lifting.
I’ve referred to the need to justify real world ethical practices many times, without hearing any response from yourself.
That’s where I am starting from.
What I think they are doing is trying to form alliances and make changes in the real world. As I have said many times. And I think they have good reasons to reject relativism as insufficiently committal. Even if realism isn’t the only alternative.