The second one!
So, given a moral realist, Sam, who argued as follows:
“We agree that humans typically infer physical facts such that we achieve more well-being with less effort when we act as though those facts were actual, and that this constitutes a compelling case for physical realism. It seems to me that humans typically infer moral facts such that we achieve more well-being with less effort when we act as though those facts were actual, and I consider that an equally compelling case for moral realism.”
...it seems you ought to have a pretty good sense of why Sam is a moral realist, and what it would take to convince Sam they were mistaken.
No?
Interesting perspective. Is this an old argument, or a new one? (seems vaguely similar to the Pascalian “act as if you believe, and that will be better for you”).
It might be formalisable in terms of bounded agents and stuff. What’s interesting is that though it implies moral realism, it doesn’t imply the usual consequence of moral realism (that all agents converge on one ethics). I’d say I understand Sam’s position, and that he has no grounds to disbelieve orthogonality!
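Roughly the kind of toy formalisation I have in mind (to be clear, everything below is my own invention: the agent names, the payoff numbers, and the crude “well-being per effort” score; it’s a sketch of the intuition, not something Sam is committed to):

# Toy sketch only: the agent names, payoff numbers, and the crude
# "well-being per effort" score are all invented for illustration.
# A bounded agent either does or doesn't act as though a proposition P is
# actual; the Sam-style criterion: P counts as a fact for that agent if
# acting as though P yields more well-being for less effort.

def wellbeing_per_effort(payoff, effort):
    return payoff / effort

# (payoff, effort) for each agent, depending on whether it acts as though P.
# P_PHYSICAL, e.g. "the river is too deep to ford": tracking it helps every
# agent, whatever that agent happens to value.
P_PHYSICAL = {
    "human_A":      {True: (10, 1), False: (2, 3)},
    "human_B":      {True: (10, 1), False: (2, 3)},
    "paperclipper": {True: (10, 1), False: (2, 3)},
}

# P_MORAL, e.g. "defection is wrong": acting as though it is actual pays off
# only for agents with broadly human-like well-being functions.
P_MORAL = {
    "human_A":      {True: (8, 1), False: (3, 1)},
    "human_B":      {True: (8, 1), False: (3, 1)},
    "paperclipper": {True: (1, 1), False: (9, 1)},
}

def treats_as_fact(table, agent):
    """Does the criterion tell this agent to act as though P is actual?"""
    return (wellbeing_per_effort(*table[agent][True])
            > wellbeing_per_effort(*table[agent][False]))

for agent in ("human_A", "human_B", "paperclipper"):
    print(agent,
          "physical:", treats_as_fact(P_PHYSICAL, agent),
          "moral:", treats_as_fact(P_MORAL, agent))
# All three agents converge on the physical proposition; only the human-like
# agents converge on the moral one.

On that toy criterion every agent ends up treating the physical proposition as a fact, while only the agents with human-like well-being functions treat the moral one that way; that’s how I’d cash out accepting Sam’s realism without giving up orthogonality.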
I’d be astonished if it were new, but I’m not knowingly quoting anyone.
As for orthogonality… well, hm. Continuing the same approach… suppose Sam says to you:
“I believe that any two sufficiently intelligent, sufficiently rational systems will converge on a set of confidence levels in propositions about physical systems, both coarse-grained (e.g., “I’m holding a rock”) and fine-grained (e.g. some corresponding statement about quarks or configuration spaces or whatever). I believe that precisely because I’m a de facto physical realist; whatever it is about the universe that constrains our experiences such that we achieve more well-being with less effort when we act as though certain statements about the physical world are true and other statements are not, I believe that’s an intersubjective property—the things that it is best for me to believe about the physical world are also the things that it is best for you to believe about the physical world, because that’s just what it means for both of us to be living in the same real physical world.
For precisely the same reasons, I believe that any two sufficiently intelligent, sufficiently rational systems will converge on a set of confidence levels in propositions about moral systems.”
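(For concreteness, here is one toy rendering of the kind of convergence Sam is gesturing at; the rendering is mine, not his: Bayesian updaters stand in for “sufficiently intelligent, sufficiently rational systems”, and a coin’s unknown bias stands in for a physical fact.)

# My gloss, not Sam's argument: two Bayesian agents with wildly different
# priors about the same physical quantity (a coin's bias) observe the same
# evidence and end up assigning nearly the same confidence.
import random

random.seed(0)
TRUE_BIAS = 0.7                          # the shared physical world
flips = [random.random() < TRUE_BIAS for _ in range(500)]

# Beta(a, b) priors: one agent starts out nearly sure the coin favours tails,
# the other nearly sure it favours heads.
agents = {"skeptic": [1, 20], "enthusiast": [20, 1]}

for heads in flips:
    for prior in agents.values():
        prior[0 if heads else 1] += 1    # standard Beta-Bernoulli update

for name, (a, b) in agents.items():
    print(f"{name}: posterior mean for the coin's heads-probability = {a / (a + b):.3f}")
# Both numbers land near 0.7: shared evidence washes out the initial disagreement.

Whether anything plays the coin’s role for moral propositions is, of course, exactly what’s in dispute.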
You consider that reasoning ungrounded. Why?
1) Evidence. There is a general convergence on physical facts, but nothing like a convergence on moral facts. Also, physical facts, since the advent of science, have been progressive (we don’t say Newton was wrong; we say we have a better theory of which his was an approximation).
2) Evidence. We have established what counts as evidence for a physical theory (and have, to some extent, separated it from simply “everyone believes this”). What then counts as evidence for a moral theory?
Awesome! So, reversing this, if you want to understand the position of a moral realist, it sounds like you could think of them as being in the position of a physical realist before the Enlightenment.
There was disagreement then about underlying physical theory, many physical theories were deeply confused, and the notion of evidence for a physical theory was not well-formalized. But if you asked a hundred people questions like “is this a rock or a glass of milk?” you’d get the same answer from all of them (barring weirdness), and many people were physical realists based solely on that, which is not terribly surprising.
Similarly, there is disagreement today about moral theory, many moral theories are deeply confused, and the notion of evidence for a moral theory is not well-formalized. But if you ask a hundred people questions like “is killing an innocent person right or wrong?” you’ll get the same answer from all of them (barring weirdness), so it ought not to be surprising that there are many moral realists on that basis.
I think there may be enough “weirdness” in response to moral questions that it would be irresponsible to treat it as dismissible.
Yes, there may well be.
Interesting. I have no idea if this is actually how moral realists think, but it does give me a handle so that I can imagine myself in that situation...
Sure, agreed.
I suspect that actual moral realists think in lots of different ways. (Actual physical realists do, too.)
But I find that starting with an existence-proof of “how might I believe something like this?” makes subsequent discussions easier.