If there is an objective morality, but we don’t care about it, is it relevant in any way?
I have no idea what ‘there is an objective morality’ would mean, empirically speaking.
“There is objective morality” basically means that morality is part of physics: just as there are natural laws of, say, gravity or electromagnetism, there are natural laws of morality, because the world just works that way. Consult e.g. Christian theology for details.
Think of a system where, for example, a yogin can learn to levitate (which is a physical phenomenon) given that he diligently practices and leads a moral life. If he diligently practices but does not lead a moral life, he doesn’t get to levitate. In such a system morality would be objective.
Note that this comment is not saying that objective morality exists, it just attempts to explain what the concept means.
Ok, I understand it in that context, as there are actual consequences. Of course, this also makes the answer trivial: of course it’s relevant, since it gives you advantages you wouldn’t otherwise have. Though even in the sense you’ve described, I’m not sure whether the word ‘morality’ really seems applicable. If torturing people let us levitate, would we call that ‘objective morality’?
EDIT: To be clear, my intent isn’t to nitpick. I’m simply saying that patterns of behavior being encoded, detected and rewarded by the laws of physics doesn’t obviously seem to equate those patterns with ‘morality’ in any sense of the word that I’m familiar with.
Sure, see e.g. good Christians burning witches.
Hm. I’ll acknowledge that’s consistent (though I maintain that calling that ‘morality’ is fairly arbitrary), but I have to question whether that’s a charitable interpretation of what modern believers in objective morality actually believe.
If you actually believe that burning a witch has some chance of saving her soul from eternal burning in hell (or even only provides a sufficient incentive for others not to agree to pacts with Satan and so surrender their souls to eternal punishment), wouldn’t you be morally obligated to do it?
I mean the sufficiency of the definition given. Consider a universe which absolutely, positively, was not created by any sort of ‘god’, but whose laws of physics happen to be wired such that torturing people lets you levitate, regardless of whether the practitioner believes he has any sort of moral justification for the act. This universe’s physics are wired this way not because of some designer deity’s idea of morality, but simply by chance. I do not believe that most believers in objective morality would consider torturing people to be objectively good in this universe.
I don’t think it needs to be in physics. It could be independent of and more general than physics, like math.
Yes, “physics” was probably unnecessarily too specific here. It’s more “this is how the world actually works”.
No. “There is an objective morality” means that moral claims have truth values that don’t depend on the mental content of the person making them. That is an epistemic claim, and has nothing to do with what, if anything, grounds them ontologically. (I haven’t answered the question empirically, because I don’t think that’s useful.)
Ethical objectivism can be grounded out in realism, either physical or metaphysical, but doesn’t have to be. Examples of objectivism without realism include utilitarianism, which only requires existing preferences, not some additional laws or properties. Other examples include ethics based on contracts, game theory, etc. These are somewhat analogous to things like economics, in that there are better and worse answers to problems, but they don’t get their truth values from straightforward correspondence to some territory.
What does ‘empiricism is correct’ mean empirically speaking?
I think Peter Singer wrote a paper arguing “no,” but I can’t find it at the moment.
Might be “The Objectivity of Ethics and the Unity of Practical Reason”.
It’s hard to disagree with Frank Jackson that moral facts supervene on physical facts—that (assuming physicalism) two universes couldn’t differ with respect to ethical facts unless they also differed in some physical facts. (So you can’t have two physically identical universes where something is wrong in one and the same thing is not wrong in the other.) That’s enough to get us objective morality, though it doesn’t help us at all with its content.
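To spell the supervenience claim out a bit (this formalization is my own gloss, not Jackson’s exact wording): for any two possible worlds $w_1$ and $w_2$,

$$\mathrm{Phys}(w_1) = \mathrm{Phys}(w_2) \;\rightarrow\; \mathrm{Moral}(w_1) = \mathrm{Moral}(w_2),$$

where $\mathrm{Phys}(w)$ and $\mathrm{Moral}(w)$ stand for the complete sets of physical and moral facts at $w$. Fixing the physical facts fixes the moral facts, which is all the objectivity claim needs—it says nothing about which moral facts obtain.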
The way we de facto argue about objective morals is like this: If some theory leads to an ethically repugnant conclusion, then the theory is a bad candidate for the job of being the correct moral theory. Some conclusions are transparently repugnant, so we can reject the theories that entail them. But then there are conclusions whose repugnance itself is a matter of controversy. Also, there are many disagreements about whether consequence A is more or less repugnant than consequence B.
So the practice of philosophical argument about values presumes some fairly unified basic intuitions about what counts as a repugnant conclusion, and then tries to produce a maximally elegant ethical theory that forces us to bite the fewest bullets. Human participants in such arguments have different temperaments and different priorities, but all have some gut feelings about when a proposed theory has gone off the rails. If we expect an AI to do real moral reasoning, I think it might also need to have some sense of the bounds. These bounds are themselves under dispute. For example, some Australian Utilitarians are infamous for their brash dismissal of certain ethical intuitions of ordinary people, declaring many such intuitions to simply be mistakes, insofar as they are inconsistent with Utilitarianism. And they have a good point: Human intuitions about many things can be wrong (folk psychology, folk cosmology, etc.). Why couldn’t the folk collectively make mistakes about ethics?
My worry is that our gut intuitions about ethics stem ultimately from our evolutionary history, and AIs that don’t share our history will not come equipped with these intuitions. That might leave them unable to get started with evaluating the plausibility of a candidate for a theory of ethics. If I correctly understand the debate of the last 2 weeks, it’s about acknowledging that we will need to hard-wire these ethical intuitions into an AI (in our case, evolution took care of the job). The question was: what intuitions should the AI start with, and how should they be programmed in? What if the AI takes our human intuitions to be ethically arbitrary, and simply rejects them once it has become superintelligent? Can we (or it) make conceptual sense of better intuitions about ethics than our folk intuitions—and in virtue of what would they be better?
We had better care about the content of objective morality—which is to say, we should all try to match our values to the correct values, even if the latter are difficult to figure out. And I certainly want any AI to feel the same way. It should never be told: Don’t worry about what’s actually right, just act so-and-so. Becoming superintelligent might not be possible without deliberation about what’s actually right, and the AI would ideally have some sort of scaffolding for that kind of deliberation. A superintelligence will inevitably ask “why should I do what you tell me?” and we had better have an answer in terms that make sense to the AI. But if it asks “why are you so confident that your meatbag folk intuitions about ethics are actually right?”, that will be a hard thing to answer to anyone’s satisfaction. Still, I don’t know another way forward.
What’s your excuse for not caring about objective morality? What have you got to do that’s more important?
I don’t think there’s any coherent way to fulfill both parts of the antecedent “there is an objective morality, but we don’t care about it.” Instead you get “there is an objective mumblemumble, but we don’t care about it,” or else “here’s this morality business that obviously lots of people care about; how objective is it?”
Moral norms tend to override other norms and preferences, so definitionally, objective morality is what everyone should care about. However, definitions don’t move atoms. I suspect this question conflates two issues: what is theoretically important about objective morality, and what we would be likely to do about it in practice.
We don’t talk about a red light for a train having made a moral decision, and I don’t think it applies even in AI. If it does, then I’d be worried about the humans who offload thinking and decision-making to a machine mind. Anyway, that entity will never comprehend anything per se, because it will never be sentient in the broadest sense. I can’t see it being an issue. Dropping the atom bomb didn’t worry anybody.