I think the standard LW argument for there being only one morality is based on the psychological unity of mankind. Human minds do not occupy an arbitrary or even a particularly large region of mindspace: the region they occupy is quite small for good reasons. Likewise, the moral theories that human minds adopt occupy quite a small region of moralityspace. The arguments around CEV suggest that these moral theories ought to converge if we extrapolate enough. I am not sure if this exact argument is defended in a LW post.
This sounds like ethical subjectivism (that ethical sentences are propositions about the attitudes of people). I’m quite amenable to ethical subjectivism but it’s an anti-realist position.
See this comment. If Omega changed the attitudes of all people, that would change what those people mean when they say morality-in-our-world, but it would not change what I mean (here, in the real world rather than the counterfactual world) when I say morality-in-the-counterfactual-world, in the same way that if Omega changed the brains of all people so that the meanings of “red” and “yellow” were switched, that would change what those people mean when they say red, but it would not change what I mean when I say red-in-the-counterfactual-world.
I deal with exactly this issue in a post I made a while back (admittedly it is too long). It’s an issue of levels of recursion in our process of modelling reality (or a counterfactual reality). Your moral judgments aren’t dependent on the attitudes of people (including yourself) that you are modeling (in this world or in a counterfactual world): they’re dependent on the cognitive algorithms in your actual brain.
In other words, the subjectivist account of morality doesn’t say that people look at the attitudes of people in the world and then conclude from that what morality says. We don’t map attitudes and then conclude from those attitudes what is and isn’t moral. Rather, we map the world and then our brains react emotionally to facts about that world and project our attitudes onto them. So morality doesn’t change in a world where people’s attitudes change because you’re using the same brain to make moral judgments about the counterfactual world as you use to make moral judgments about this world.
The post I linked to has some diagrams that make this clearer.
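If it helps, here is the same point as a toy code sketch (my own illustration with made-up names, not something from the post): the evaluating function belongs to my actual brain, and the attitudes of the people inside the world being judged are just more input data; they never replace the evaluator itself.

```python
# Toy model: moral judgment as a fixed function of the actual brain,
# applied to a (possibly counterfactual) world. All names are hypothetical.

def my_actual_brain_judges(world):
    """My evaluator reacts to facts about the world (here, suffering),
    not to what the world's inhabitants happen to approve of."""
    return "wrong" if world["gratuitous_suffering"] else "permissible"

actual_world = {"gratuitous_suffering": True,
                "inhabitants_approve_of_it": False}

# Omega flips everyone's attitudes inside the counterfactual world...
counterfactual_world = {"gratuitous_suffering": True,
                        "inhabitants_approve_of_it": True}

# ...but the same evaluator is applied to both worlds, so the verdict
# is unchanged: the inhabitants' approval is data, not the judge.
assert my_actual_brain_judges(actual_world) == "wrong"
assert my_actual_brain_judges(counterfactual_world) == "wrong"
```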
As for the linked comment, I am unsure there is a single, distinct, and unchanging logical object to define—but if there is one I agree with the comment and think that defining the algorithm that produces human attitudes is a crucial project. But clearly an anti-realist one.
Right, but that is strong evidence that morality isn’t an externally existing object.
I’m not sure what you mean by this.
Real objects are subject to counterfactual alterations.
Yes, but logical objects aren’t.
...if there is one I agree with the comment and think that defining the algorithm that produces human attitudes is a crucial project. But clearly an anti-realist one.
If I said “when we talk about Peano arithmetic, we are referring to a logical object. If counterfactually Peano had proposed a completely different set of axioms, that would change what people in the counterfactual world mean by Peano arithmetic, but it wouldn’t change what I mean by Peano-arithmetic-in-the-counterfactual-world,” would that imply that I’m not a mathematical Platonist?
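(For concreteness, the logical object here is the one picked out by the axioms themselves; a standard presentation of first-order Peano arithmetic looks like the following. If counterfactual-Peano writes down different axioms, he simply names a different object, and this one is untouched.)

```latex
% First-order Peano arithmetic, standard presentation.
% S is the successor function; \varphi ranges over formulas (induction schema).
\begin{align*}
  &\forall x.\; S(x) \neq 0 \\
  &\forall x\,\forall y.\; S(x) = S(y) \rightarrow x = y \\
  &\forall x.\; x + 0 = x \qquad \forall x\,\forall y.\; x + S(y) = S(x + y) \\
  &\forall x.\; x \cdot 0 = 0 \qquad \forall x\,\forall y.\; x \cdot S(y) = (x \cdot y) + x \\
  &\bigl(\varphi(0) \land \forall x.\,(\varphi(x) \rightarrow \varphi(S(x)))\bigr) \rightarrow \forall x.\,\varphi(x)
\end{align*}
```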
I literally just edited my comment for clarity. It might make more sense now. I will edit this comment with a response to your point here.
Edit:
If I said “when we talk about Peano arithmetic, we are referring to a logical object. If counterfactually Peano had proposed a completely different set of axioms, that would change what people in the counterfactual world mean by Peano arithmetic, but it wouldn’t change what I mean by Peano-arithmetic-in-the-counterfactual-world,” would that imply that I’m not a mathematical Platonist?
Any value system is a logical object. For that matter, any model of anything is a logical object; any false theory of physics is a logical object. Theories of morality and of physics (logical objects both) are interesting because they purport to describe something in the world. The question before us is: do normative theories purport to describe an object that is mind-independent, or an object that is subjective?
Okay. I don’t think we actually disagree about anything. I just don’t know what you mean by “realist.”
So morality doesn’t change in a world where people’s attitudes change because you’re using the same brain to make moral judgments about the counterfactual world as you use to make moral judgments about this world.
This sounds like ethical subjectivism (that ethical sentences are propositions about the attitudes of people). I’m quite amenable to ethical subjectivism but it’s an anti-realist position.
OK, suppose that this is an anti-realist position. People’s attitudes exist, but their existence isn’t what we mean by morality existing. Is that how it qualifies as an anti-realist position?
I was intrigued by a comment you made some time ago: that you are not a realist, and so you wonder what it is that everyone is arguing about. What is your position on ethical subjectivism?
OK, suppose that this is an anti-realist position. People’s attitudes exist, but their existence isn’t what we mean by morality existing. Is that how it qualifies as an anti-realist position?
Yes, that sounds right. So here is a generic definition of realism (in general, not for morality in particular):
a, b, and c and so on exist, and the fact that they exist and have properties such as F-ness, G-ness, and H-ness is (apart from mundane empirical dependencies of the sort sometimes encountered in everyday life) independent of anyone’s beliefs, linguistic practices, conceptual schemes, and so on.
E.g. a realist position on ghosts doesn’t include the position that “ghost” names a kind of hallucination people have, even though on that view something real (the hallucination) does exist.
What is your position on ethical subjectivism?
I think it is less wrong than every variety of moral realism, but I am unsure whether moral claims are reports of subjective attitudes (subjectivism) or expressions of subjective attitudes (non-cognitivism). But I don’t think that distinction matters very much.
Luckily, I live in a world populated by entities who mostly concur with my attitudes regarding how the universe should be. This lets us cooperate and formalize procedures for determining outcomes that are congenial to our attitudes. But these attitudes are the result of a cognitive structure determined by natural selection and cultural transmission, altered by reason and language. As such, they contain all manner of kludgey artifacts and heuristics that respond oddly to novel circumstances. So I find it weird that anyone thinks they can be described by something like preference utilitarianism or Kantian deontology. Those are the kind of parsimonious, elegant theories that we expect to find governing natural laws, not culturally and biologically evolved structures. In fact, Kant was emulating Newton.
Attitudes produced by human brains are going to be contextually inconsistent, subject to framing effects, unable to process most novel inputs, cluttered, etc. What’s more, since our attitudes aren’t produced by a single, universal utility function but by a cluster of heuristics, most moral disagreements are going to be the result of certain heuristics being more dominant in some people than in others. That makes these grand theories about these attitudes silly to argue about: positions aren’t determined by things in the universe or by logic. They’re determined by the cognitive styles of individuals and the cultural conditioning they receive. Most of Less Wrong is robustly consequentialist because most people here share a particular cognitive style—we don’t have any grand insights into reality when it comes to normative theory.
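A toy model of what I mean (the heuristics, weights, and case are invented for illustration, not a real psychological theory): two judges share the same cluster of heuristics but weight them differently, so they reach opposite verdicts on the same facts without either making a factual or logical error.

```python
# Toy model: moral judgment as a weighted cluster of heuristics.
# Heuristic names, weights, and the case are all made up.

HEURISTICS = {
    "harm":     lambda case: -case["harm_done"],
    "fairness": lambda case: -case["unfairness"],
    "loyalty":  lambda case: case["helps_in_group"],
}

def verdict(weights, case):
    """Sum the weighted heuristic outputs; positive means 'approve'."""
    score = sum(w * HEURISTICS[name](case) for name, w in weights.items())
    return "approve" if score > 0 else "condemn"

case = {"harm_done": 1.0, "unfairness": 0.5, "helps_in_group": 2.0}

# Same heuristics, different dominance: the verdicts diverge.
print(verdict({"harm": 1.0, "fairness": 1.0, "loyalty": 0.2}, case))  # condemn
print(verdict({"harm": 0.4, "fairness": 0.3, "loyalty": 1.0}, case))  # approve
```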
E.g. a realist position on ghosts doesn’t include the position that “ghost” names a kind of hallucination people have, even though on that view something real (the hallucination) does exist.
I see, thanks for that distinction! I now need to reread parts of the metaethics sequence, since I believe I came away with the thesis that morality is real in this sense… That is, that morality is real because we have bits of code (evolutionary, mental, etc.) that output positive or negative feelings about different states of the universe, and this code is “real” even if the positive and negative don’t exist external to that code.
So I find it weird that anyone thinks they can be described by something like preference utilitarianism or Kantian deontology. Those are the kind of parsimonious, elegant theories that we expect to find governing natural laws, not culturally and biologically evolved structures.
I agree...
That makes these grand theories about these attitudes silly to argue about: positions aren’t determined by things in the universe or by logic. They’re determined by the cognitive styles of individuals and the cultural conditioning they receive.
and I don’t disagree with this. I do hope/half expect that there should be some patterns to our attitudes, not as simplistic as natural laws but perhaps guessable to someone who thought about it the right way.
Thanks for describing your positions in more detail.