Speaking for myself (though I think many other rationalists think similarly), I approach this question with a particular mindset that I’m not sure how to describe exactly, but I would like to gesture at with some notes (apologies if all of these are obvious, but I want to get them out there for the sake of clarity):
Abstractions tend to be leaky
As Sean Carroll would say, there are different “ways of talking” about phenomena, at different levels of abstraction. In physics, we use the lowest level (and talk about quantum fields or whatever) when we want to be maximally precise, but that doesn’t mean that higher-level emergent properties don’t exist. (Just because temperature is an aggregate property of fast-moving particles doesn’t mean that heat isn’t “real”.) And it would be a total waste of time not to use the higher-level concepts when discussing higher-level phenomena (e.g. temperature, pressure, color, consciousness, etc.).
Various intuitive properties that we would like systems to have may turn out to be impossible, either individually, or together. Consider Arrow’s theorem for voting systems, or Gödel’s incompleteness theorems. Does the existence of these results mean that no voting system is better than any other? Or that formal systems are all useless? No, but they do mean that we may have to abandon previous ideas we had about finding the one single correct voting procedure, or axiomatic system. We shouldn’t stop talking about whether a statement is provable, but, if we want to be precise, we should clarify which formal system we’re using when we ask the question.
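As a toy illustration of that point (my own example, not from the post): even plain majority voting fails the intuitive requirement that it produce a coherent ranking, which is the kind of thing Arrow’s theorem generalizes. A minimal Python sketch of a Condorcet cycle, with made-up ballots:

```python
# Toy Condorcet cycle (illustrative only): three voters, three candidates.
# Each ballot ranks candidates from most to least preferred.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# A beats B, B beats C, and C beats A -- the majority preference is cyclic,
# so there is no candidate that a majority prefers over all the others.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

None of this makes majority voting useless; it just means the intuitive picture of “the will of the majority” as a single coherent ranking doesn’t always exist.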
Phenomena that a folk or intuitive understanding sees as one thing often turn out, on careful inspection, to be two (or more) things, or to be meaningless in certain contexts. E.g. my compass points north. But if I’m in Greenland, the place it points toward and the place where the Earth’s rotational axis meets the surface aren’t the same thing anymore. And if I’m in space, there just is no north anymore (or up, for that matter).
When you go through an ontological shift, and discover that the concepts you were using to make sense of the world aren’t quite right, you don’t have to just halt, melt, and catch fire. It doesn’t mean that all of your past conclusions were wrong. As Eliezer would say, you can rescue the utility function.
This state of having leaky abstractions, and concepts that aren’t quite right, is the default. It is rare for an intuitive or folk concept to survive careful analysis unmodified. Maybe the whole numbers would be an example that survives unmodified. But even there, our idea of what counts as a ‘number’ is very different from what people thought a thousand years ago.
With all that in mind as background, when I come to the question of morality or normativity, it seems very natural to me that one might conclude that there is no single objective rule, or set of rules or whatever, that exactly matches our intuitive idea of “shouldness”.
Does that mean I can’t say which of two actions is better? I don’t think so. It means that when I do, I’m probably being a bit imprecise, and what I really mean is some combination of the emotivist statement referenced in the post, plus a claim about what consequences will follow from the action, combined with an implicit expression of belief about how my listeners will feel about those consequences, etc.
I think basically all of the examples in the post of rationalists using normative language can be seen as examples of this kind of shorthand. E.g. saying that one should update one’s credences according to Bayes’s rule is shorthand for saying that this procedure will produce the most accurate beliefs (and also that I, the speaker, believe it is in the listener’s best interest to have accurate beliefs, etc.).
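To cash out that shorthand a little, here’s a minimal sketch of a single Bayesian update in Python; the prior and likelihoods are made-up numbers for illustration, not anything from the post:

```python
# Hypothetical example: updating a credence with Bayes's rule.
# All numbers are illustrative.

prior = 0.30              # P(hypothesis): credence before seeing the evidence
p_e_given_h = 0.80        # P(evidence | hypothesis)
p_e_given_not_h = 0.20    # P(evidence | not hypothesis)

# Total probability of observing the evidence
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes's rule: P(hypothesis | evidence)
posterior = p_e_given_h * prior / p_e

print(f"credence after updating: {posterior:.2f}")  # 0.63
```

The normative claim “you should update this way” is then shorthand for a descriptive claim about what this procedure does to the accuracy of your beliefs, plus some implicit claims about why you’d want that.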
For me it seems like a totally natural and unsurprising state of affairs for someone to believe that there is no single precise definition of normativity that perfectly matches our folk understanding of shouldness (or that is otherwise the objectively “correct” morality), and also to go around saying that one should do this or that, or that something is the right thing to do.
Similarly, if your physicist friend says that two things happened at the same time, you don’t need to play gotcha and say, “Ah, but I thought you said there was no such thing as absolute simultaneity.” You just assume that they actually mean a more complex statement, like “Approximately at the same time, assuming the reference frame of someone on the surface of the Earth.”
A folk understanding of morality might take it to be defined as:
what everyone in their hearts knows is right
what will have the best outcomes for me personally in the long run
what will have the best outcomes for the people I care about
what God says to do
what makes me feel good to do after I’ve done it
what other people will approve of me having done
And then it turns out that there just isn’t any course of action, or rule for action, that satisfies all those properties.
My bet is that there just isn’t any definition of normativity that satisfies all the intuitive properties we would like. But that doesn’t mean that I can’t go around meaningfully talking about what’s right in various situations, any more than the fact that the magnetic pole isn’t exactly on the axis of rotation means that I can’t point in a direction if someone asks me which way is north.
I’m not sure if my position would be considered “moral anti-realist”, but if so, it seems to me a bit like calling Einstein a “space anti-realist”, or a “simultaneity anti-realist”. Einstein says that there is space, and there is simultaneity. They just don’t match our folk concepts.
I feel like my position is more like, “we actually mean a bunch of different related things when we use normative language and many of those can be discussed as matters of objective fact” than “any discussion of morality is vacuous”.
Does that just mean I’m an anti-realist (or naturalist realist?) and not an error theorist?
EDIT: After following the link in the footnotes to Luke’s post on Pluralistic Moral Reductionism, it seems like I am just advocating the same position.
EDIT2: But given that the author of this post was aware of that post, I’m surprised that he thought rationalists’ use of normative statements was evidence of contradiction (or tension), rather than of using normative language in a variety of different ways, as in Luke’s post. Does any of the tension survive if you assume the speakers are pluralistic moral reductionists?
I’m not sure if my position would be considered “moral anti-realist”, but if so, it seems to me a bit like calling Einstein a “space anti-realist”, or a “simultaneity anti-realist”. Einstein says that there is space, and there is simultaneity. They just don’t match our folk concepts.
That’s a great way to describe it. I think this is completely normal for anti-realists (at least in EA and rationality). Somehow the realists rarely seem to pass the Ideological Turing Test for anti-realism (of course, similar things can be said for the other direction and I think Ben Garfinkel’s post explains really well some of the intuitions that anti-realists might be missing, or ways in which some might simplify their picture).
Quite related: The Wikipedia page on Anti-realism was recently renamed to “Nihilism.” While that’s ultimately just semantics, I think this terminological move is insane. It’s a bit as though the philosophers who believe in Libertarian Free Will had conspired to only use the term “Fatalism” for both Determinism and Compatibilism.
Re-posting a link here, on the off-chance it’s of interest despite its length. ESRogs and I also had a parallel discussion on the EA Forum, which led me to write up this unjustifiably lengthy doc partly in response to that discussion and partly in response to the above comment.
Thanks for this! My thinking is similar (I have an early draft about why realists and anti-realists disagree with one another, and have been trying to get closer to passing the Ideological Turing Test for realism). It was good to be able to compare my thinking to that of someone with stronger sympathies toward realism!