I usually treat this behavior as something similar to the availability heuristic.
That is, there’s a theory that one of the ways humans calibrate our estimates of the likelihood of an event X is by trying to imagine an instance of X, measuring how long that takes, and setting our estimate of its probability in inverse proportion to the time involved. (This process is typically not explicitly presented to conscious awareness.) If an imagined instance of X is immediately available, we experience high confidence that X is true.
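To make that concrete, here’s a toy sketch of the mechanism as I’m describing it (purely illustrative; `felt_probability`, `imagine_instance`, and the `scale` constant are stand-ins of my own, not claims about how cognition is actually implemented):

```python
import time

def felt_probability(imagine_instance, scale=5.0):
    """Toy model of the availability heuristic: the felt probability of an
    event is high when an example of it comes to mind quickly, and falls
    off as retrieval takes longer. `imagine_instance` stands in for the
    (unconscious) retrieval process; it returns True if an example came
    to mind and False otherwise."""
    start = time.monotonic()
    found = imagine_instance()
    elapsed = time.monotonic() - start
    if not found:
        return 0.0                        # nothing comes to mind: feels very unlikely
    return 1.0 / (1.0 + elapsed / scale)  # roughly inverse to retrieval time

# An instance that is instantly "available" feels near-certain:
print(felt_probability(lambda: True))     # ~1.0
```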
That mechanism makes a certain amount of rough-and-ready engineering sense, though of course it has lots of obvious failure modes, especially as you expand the system’s imaginative faculties. Many of those failure modes are frequently demonstrated in modern life.
The thing is, we use much of the same machinery that we evolved for considering events like “a tiger eats my children” to consider pseudo-events like “a tiger eating my children is a bad thing.” So it’s easy for us to calibrate our estimates of the likelihood that a tiger eating my children is a bad thing in the same way: if an instance of a tiger eating my children feeling like a bad thing is easy for me to imagine, I experience high confidence that the proposition is true. It just feels obvious.
I don’t think this is quite the same thing as moral realism, but when that judgment is simply taken as an input without being carefully examined, the result is largely equivalent.
Conversely, the more easily I can imagine a tiger eating my children not feeling like a bad thing, the lower that confidence. More generally, the more I actually analyze (rather than simply referencing) my judgments, the less compelling this mechanism becomes.
What I expect, given the above, is that if I want to shake someone off that kind of naive moral realist position, it helps to invite them to consider situations in which they arrive at counterintuitive (to them) moral judgments. The more I do this, the less strongly the availability heuristic fires, and over time this will weaken that leg of their implicit moral realism, even if I never engage with it directly.
I’ve known a number of people who react very very negatively to being invited to consider such situations, though, even if they don’t clearly perceive it as an attack on their moral confidence.
But philosophers are extremely fond of analysis, and make great use of trolley problems and similar edge cases. I’m really torn—people who seem very smart and skilled in reasoning take positions that seem to make no sense. I keep telling myself that they are probably right and I’m wrong, but the more I read about their justifications, the less convincing they are...
Yeah, that’s fair. Not all philosophers do this, any more than all computer programmers come up with test cases to ensure their code is doing what it ought, but I agree it’s a common practice.
Can you summarize one of those positions as charitably as you’re able to? It might be that, given such a summary, someone else can offer an insight that extends that structure.
“There are sets of objective moral truths such that any rational being that understood them would be compelled to follow them”. The arguments seem mainly to be:
1) Playing around with the meaning of rationality until you get something (“any rational being would realise their own pleasure is no more valid than that of others” or “pleasure is the highest principle, and any rational being would agree with this, or else be irrational”)
2) Convergence among human values.
3) Moral progress for society: we’re better than we used to be, so there needs to be some scale to measure the improvements.
4) Moral progress for individuals: when we think about things a lot, we make better moral decisions than when we were young and naive. Hence we’re getting better at moral reasoning, so there is some scale on which to measure this.
5) Playing around with the definition of “truth-apt” (able to have a valid answer) in ways that strike me, uncharitably, as intuition-pumping word games. When confronted with this, I generally end up saying something like “my definitions do not map on exactly to yours, so your logical steps are false dichotomies for me”.
6) Realising things like “if you can’t be money pumped, you must be an expected utility maximiser”, which implies that expected utility maximisation is superior to other reasoning, hence that there are some methods of moral reasoning which are strictly inferior. Hence there must be better ways of moral reasoning and (this is the place where I get off) a single best way (though that argument is generally implicit, never explicit).
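To spell out the money-pump step in 6, here’s a toy version (my own illustration, with a made-up cyclic preference ordering; nothing in the argument commits anyone to this particular encoding):

```python
def prefers(x, y):
    """Cyclic (intransitive) preferences: A > B, B > C, C > A."""
    return (x, y) in {("A", "B"), ("B", "C"), ("C", "A")}

def money_pump(start="C", fee=1.0, laps=3):
    """Repeatedly offer the agent the item it prefers to what it currently
    holds, charging a small fee per trade. With cyclic preferences it
    accepts every offer and ends each lap holding what it started with,
    three fees poorer."""
    holding, paid = start, 0.0
    offers = ["B", "A", "C"]              # each offer beats the item before it
    for _ in range(laps):
        for offer in offers:
            if prefers(offer, holding):   # the agent happily "trades up"
                holding, paid = offer, paid + fee
    return holding, paid

print(money_pump())                       # ('C', 9.0): same item, nine units poorer
```

The immunity claim is just that acyclic, consistent preferences can’t be cycled like this; the further jump to a single best way of moral reasoning is the part that doesn’t follow from the sketch.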
(nods) Nice.
OK, so let me start out by saying that my position is similar to yours… that is, I think most of this is nonsense. But having said that, and trying to adopt the contrary position for didactic purposes… hm.
So, a corresponding physical-realist assertion might be that there are sets of objective physical structures such that any rational being that perceived the evidence for them would be compelled to infer their existence. (Yes?)
Now, why might one believe such a thing? Well, some combination of reasons 2-4 seems to capture it.
That is: in practice, there at least seem to be physical structures we all infer from our senses such that we achieve more well-being with less effort when we act as though those structures existed. And there are other physical structures that we infer the existence of via a more tenuous route (e.g., the center of the Earth, or Alpha Centauri, or quarks, or etc.), to which #2 doesn’t really apply (most people who believe in quarks have been taught to believe in them by others; they mostly didn’t independently converge on that belief), but 3 and 4 do… when we posit the existence of these entities, we achieve worthwhile things that we wouldn’t achieve otherwise, though sometimes it’s very difficult to express clearly what those things actually are. (Yes?)
So… ok. Does that case for physical realism seem compelling to you?
If so, and if arguments 2-4 are sufficient to compel a belief in physical realism, why are their analogs insufficient to compel a belief in moral realism?
No—to me it just highlights the difference between physical facts and moral facts, making them seem very distinct. But I can see how if we had really strong 2-4, it might make more sense...
I’m not quite sure I understood you. Are you saying “no,” that case for physical realism doesn’t seem compelling to you? Or are you saying “no,” the fact that such a case can compellingly be made for physical realism does not justify an analogous case for moral realism?
The second one!
So, given a moral realist, Sam, who argued as follows:
“We agree that humans typically infer physical facts such that we achieve more well-being with less effort when we act as though those facts were actual, and that this constitutes a compelling case for physical realism. It seems to me that humans typically infer moral facts such that we achieve more well-being with less effort when we act as though those facts were actual, and I consider that an equally compelling case for moral realism.”
...it seems you ought to have a pretty good sense of why Sam is a moral realist, and what it would take to convince Sam they were mistaken.
No?
Interesting perspective. Is this an old argument, or a new one? (seems vaguely similar to the Pascalian “act as if you believe, and that will be better for you”).
It might be formalisable in terms of bounded agents and stuff. What’s interesting is that though it implies moral realism, it doesn’t imply the usual consequence of moral realism (that all agents converge on one ethics). I’d say I understood Sam’s position, and that he has no grounds to disbelieve orthogonality!
I’d be astonished if it were new, but I’m not knowingly quoting anyone.
As for orthogonality… well, hm. Continuing the same approach… suppose Sam says to you:
“I believe that any two sufficiently intelligent, sufficiently rational systems will converge on a set of confidence levels in propositions about physical systems, both coarse-grained (e.g., “I’m holding a rock”) and fine-grained (e.g. some corresponding statement about quarks or configuration spaces or whatever). I believe that precisely because I’m a de facto physical realist; whatever it is about the universe that constrains our experiences such that we achieve more well-being with less effort when we act as though certain statements about the physical world are true and other statements are not, I believe that’s an intersubjective property—the things that it is best for me to believe about the physical world are also the things that it is best for you to believe about the physical world, because that’s just what it means for both of us to be living in the same real physical world.
For precisely the same reasons, I believe that any two sufficiently intelligent, sufficiently rational systems will converge on a set of confidence levels in propositions about moral systems.”
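A toy way to picture the physical half of Sam’s claim (my own illustration; Sam’s argument doesn’t depend on this particular model): two Bayesian agents with very different priors, shown the same evidence about the same coin, end up with nearly the same confidence.

```python
import random

def posterior_bias(prior_heads, prior_tails, flips):
    """Beta-Bernoulli update: posterior mean estimate of a coin's
    heads-probability after observing `flips` (a list of 0s and 1s)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return (prior_heads + heads) / (prior_heads + prior_tails + len(flips))

random.seed(0)
true_bias = 0.7
flips = [1 if random.random() < true_bias else 0 for _ in range(10_000)]

skeptic    = posterior_bias(1, 20, flips)       # starts out expecting tails
enthusiast = posterior_bias(20, 1, flips)       # starts out expecting heads
print(round(skeptic, 3), round(enthusiast, 3))  # both end up near 0.7
```

Whether anything plays the role of the shared coin for moral propositions is, of course, exactly what’s at issue.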
You consider that reasoning ungrounded. Why?
1) Evidence. There is a general convergence on physical facts, but nothing like a convergence on moral facts. Also, physical facts, since the advent of science, are progressive (we don’t say Newton was wrong, we say we have a better theory to which his was an approximation).
2) Evidence. We have established what counts as evidence for a physical theory (and have, to some extent, separated it from simply “everyone believes this”). What then counts as evidence for a moral theory?
Awesome! So, reversing this, if you want to understand the position of a moral realist, it sounds like you could consider them in the position of a physical realist before the Enlightenment.
There was disagreement then about underlying physical theory, and indeed many physical theories were deeply confused, and the notion of evidence for a physical theory was not well-formalized, but if you asked a hundred people questions like “is this a rock or a glass of milk?” you’d get the same answer from all of them (barring weirdness), and there were many physical realists nevertheless based solely on that, and this is not terribly surprising.
Similarly, there is disagreement today about moral theory, and many moral theories are deeply confused, and the notion of evidence for a moral theory is not well-formalized, but if you ask a hundred people questions like “is killing an innocent person right or wrong?” you’ll get the same answer from all of them (barring weirdness), so it ought not be surprising that there are many moral realists based on that.
I think there may be enough “weirdness” in response to moral questions that it would be irresponsible to treat it as dismissible.
Yes, there may well be.
Interesting. I have no idea if this is actually how moral realists think, but it does give me a handle so that I can imagine myself in that situation...
Sure, agreed.
I suspect that actual moral realists think in lots of different ways. (Actual physical realists do, too.)
But I find that starting with an existence-proof of “how might I believe something like this?” makes subsequent discussions easier.
I could add: Objective punishments and rewards need objective justification.
From my perspective, treating rationality as always instrumental, and never a terminal value, is playing around with its traditional meaning. (And indiscriminately teaching instrumental rationality is like indiscriminately handing out weapons. The traditional idea, going back to at least Plato, is that teaching someone to be rational improves them… changes their values.)