I’m not entirely sure what moral realism even gets you. Regardless of whether morality is “real”, I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes. I suspect the moral realism debate is confused altogether.
> I’m not entirely sure what moral realism even gets you.
Here’s what I wrote in Six Plausible Meta-Ethical Alternatives: “Most intelligent beings in the multiverse share similar preferences. This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations (or the equivalent in other universes) tend to discover all of these facts. There are occasional paperclip maximizers that arise, but they are a relatively minor presence or tend to be taken over by more sophisticated minds.”
> Regardless of whether morality is “real”, I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes.
In the above scenario, once you become intelligent enough and philosophically sophisticated enough, you’ll realize that your current attitudes are wrong (or right, as the case may be) and change them to better fit the relevant moral facts.
> Most intelligent beings in the multiverse share similar preferences.
I mean, this could very well be true, but at best it points to some truths about convergent psychological evolution.
> This came about because there are facts about what preferences one should have, just like there exist facts about what decision theory one should use or what prior one should have, and species that manage to build intergalactic civilizations
Sure, there are facts about what preferences would best enable the emergence of an intergalactic civilization. I struggle to see these as moral facts.
Also, there’s definitely a manifest-destiny-evoking, unquestioned moralizing of space exploration going on right now, almost as if morality’s importance is only as an instrument to our becoming hegemonic masters of the universe. The angle from which you approached this question is value-laden in an idiosyncratic way (not in a particularly foreign way, here on LessWrong, but value-laden nonetheless).
One can recognize that one would be “better off” with a different preference set without the alternate set being better in some objective sense.
> change them to better fit the relevant moral facts.
I’m saying the self-reflective process that leads to increased parsimony among moral intuitions does not require the objective reality of moral facts, or even belief in moral realism. I guess this puts me somewhere between relativism and subjectivism according to your linked post?
> Sure, there are facts about what preferences would best enable the emergence of an intergalactic civilization. I struggle to see these as moral facts.
There’s a misunderstanding/miscommunication here. I wasn’t suggesting that “what preferences would best enable the emergence of an intergalactic civilization” are moral facts. Instead, I was suggesting in that scenario that building an intergalactic civilization may require a certain amount of philosophical ability and willingness/tendency to be motivated by normative facts discovered through philosophical reasoning, and that philosophical ability could eventually enable that civilization to discover and be motivated by moral facts.
In other words, it’s [high philosophical ability/sophistication causes both intergalactic civilization and discovery of moral facts], not [discovery of “moral facts” causes intergalactic civilization].
Well, I struggle to articulate what exactly we disagree on, because I find no real issue with this comment. Maybe I would say “high philosophical ability/sophistication causes both intergalactic civilization and moral convergence”? I hesitate to call the result of that moral convergence “moral fact,” though I can conceive of that convergence.
> I’m not entirely sure what moral realism even gets you.
It gets you something that error theory doesn’t get you, which is that moral claims have truth values. And it gets you something that subjectivism doesn’t get you, which is that some people are actually wrong, and not just different from you.
> Regardless of whether morality is “real”, I still have attitudes towards certain behaviors and outcomes, and attitudes towards other people’s attitudes.
That’s parallel to pointing out that people still have opinions when objective truth is available. People should believe the truth (this site, passim) and similarly should follow the true morality.
Uh… I guess I cannot get around the regress involved in claiming my moral values superior to competing systems in an objective sense? I hesitate to lump together the kind of missteps involved in a mistaken conception of reality (a misapprehension of non-moral facts) with whatever goes on internally when two people arrive at different values.
I think it’s possible to agree on all mind-independent facts without entailing perfect accord on all value propositions, and that moral reflection is fully possible without objective moral truth. Perhaps I do not get to point at a repulsive actor and say they are wrong in the strict sense of believing falsehoods, but I can deliver a verdict on their conduct all the same.
> Uh… I guess I cannot get around the regress involved in claiming my moral values superior to competing systems in an objective sense?
It looks like some people can, since the attitudes of professional philosophers break down as:
Meta-ethics: moral realism 56.4%; moral anti-realism 27.7%; other 15.9%.
I can see how the conclusion would be difficult to reach if you make assumptions that are standard around here, such as:
Morality is value
Morality is only value
All value is moral value.
But I suppose other people are making other assumptions.
> Perhaps I do not get to point at a repulsive actor and say they are wrong in the strict sense of believing falsehoods, but I can deliver a verdict on their conduct all the same.
Some verdicts lead to jail sentences. If Alice does something that is against Bob’s subjective value system, and Bob does something that is against Alice’s subjective value system, who ends up in jail? Punishments are things that occur objectively, so they need an objective justification.
Subjective ethics allows you to deliver a verdict in the sense of “tut-tutting,” but morality is something that connects up with laws and punishments, and that is where subjectivism is weak.
To make Wei Dai’s answer more concrete, suppose something like the symmetry theory of valence is true; in that case, there’s a crisp, unambiguous formal characterization of all valence. Then add open individualism to the picture, and it suddenly becomes a lot more plausible that many civilizations converge not just towards similar ethics, but exactly identical ethics.
I’m immensely skeptical that open individualism will ever be more than a minority position (among humans, at least). But at any rate, convergence on an ethic doesn’t demonstrate the objective correctness of that ethic from outside that ethic.