Okay, we don’t disagree at all.
There is an objective sense in which actions have consequences. I am always surprised when people seem to think I’m denying this. Science works, there is a concrete and objective reality, and we can, with varying degrees of accuracy, predict outcomes through empirical study. Zero disagreement from me on that point.
So, we judge consequences of actions with our preferences. One can be empirically incorrect about what consequences an action can have, and if you choose to define “wrong” as those actions which reduce the utility of whatever function you happen to care about, then sure, we can determine that objectively too. All I am saying is that there is no objective method for selecting the function to use, and it seems like we’re in agreement on that.
Namely, we privilege utility functions which value human life only because of facts about our brains, as shaped by our genetics, evolution, and experiences. If an alien came along and saw humans as a pest to be eradicated, we could say:
“Exterminating us is wrong!”
… and the alien could say:
“LOL. No, silly humans. Exterminating you is right!”
And there is no sense in which either party has an objective “rightness” that the other lacks. They are each referring to the utility functions they care about.
Note that the definitional dispute rears its head in the case where the humans say, “Exterminating us is morally wrong!” in which case strong moral relativists insist the aliens should respond, “No, exterminating you is morally right!”, while moral realists insist the aliens should respond “We don’t care that it’s morally wrong—it’s shmorally right!”
There is also a breed of moral realist who insists that the aliens would have somehow also evolved to care about morality, such as the Kantians, who believe morality follows necessarily from basic reason. I think the burden of proof still falls on them for that, but unfortunately there aren’t many smart aliens to test.
The aliens could say it’s morally right, since no amount of realism/objectivism stops one from being able to make false statements.
That doesn’t seem relevant. I was noting cases of what the aliens should say based on what they apparently wanted to communicate. I was thus assuming they were speaking truthfully in each case.
In other words, in a world where strong moral relativism was true, it would be true that the aliens were doing something morally right by exterminating humans according to “their morality”. In a world where moral realism is true, it would be false that the aliens were doing something morally right by exterminating humans, though it might still be the case that they’re doing something ‘shmorally’ right, where morality is something we care about and ‘shmorality’ is something they care about.
There is a sense in which one party is objectively wrong. The aliens do not want to be exterminated, so they should not exterminate.
So, we’re working with thomblake’s definition of “wrong” as those actions which reduce utility for whatever function an agent happens to care about. The aliens care about themselves not being exterminated, but may actually assign very high utility to humans being wiped out.
Perhaps we would be viewed as pests, like rats or pigeons. Just as humans can assign utility to exterminating rats, the aliens could do so for us.
Exterminating humans has the objectively determinable outcome of reducing the utility in your subjectively privileged function.
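To make the structure of that claim concrete, here is a minimal sketch; the agents, outcomes, and utility numbers are all invented for illustration. Once a utility function is fixed, whether an action counts as “wrong” in thomblake’s sense is an objective matter, but nothing in the calculation picks which function to fix.

```python
# Hypothetical utility assignments, invented purely to illustrate the argument.
human_utility = {"humans survive": 100, "humans exterminated": -1000}
alien_utility = {"humans survive": -50, "humans exterminated": 80}  # humans as pests

def wrong_according_to(utility, action_outcome, baseline_outcome):
    """'Wrong' in thomblake's sense: the action reduces utility under the given function."""
    return utility[action_outcome] < utility[baseline_outcome]

# Once a function is fixed, the verdict is an objective matter of fact...
print(wrong_according_to(human_utility, "humans exterminated", "humans survive"))  # True
print(wrong_according_to(alien_utility, "humans exterminated", "humans survive"))  # False
# ...but nothing inside either calculation tells you which function to plug in.
```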
Inasmuch as we are talking about objective rightness, we are not talking about utility functions, because not everyone is running the same utility function, and it makes sense to say some UFs are objectively wrong.
What would it mean for a utility function to be objectively wrong? How would one determine that a utility function has the property of “wrongness”?
Please, do not answer “by reasoning about it” unless you are willing to provide that reasoning.
I did provide the reasoning in the alien example.
Let’s break this all the way down. Can you give me your thesis?
I mean, I see there is a claim here:
“The aliens do not want to be exterminated, so they should not exterminate.”
… of the format (X therefore Y). I can understand what the (X) part of it means: aliens with a preference not to be destroyed. Now the (Y) part is a little murky. You’re saying that the truth of X implies that they “should not exterminate”. What does the word should mean there?
It means universalisable rules.
You’re signalling to me right now that you have no desire to have a productive conversation. I don’t know if you’re meaning to do that, but I’m not going to keep asking questions if it seems like you have no intent to answer them.
I’m busy, I’ve answered it several times before, and you can look it up yourself, e.g.:
“Now we can return to the “special something” that makes a maxim a moral maxim. For Kant it was the maxim’s universalizability. (Note that universalizability is a fundamentally different concept than universality, which refers to the fact that some thing or concept not only should be found everywhere but actually is. However, the two concepts sometimes flow into each other: human rights are said to be universal not in the sense that they are actually conceptualized and respected in all cultures but rather in the sense that reason requires that they should be. And this is a moral “should.”) However, in the course of developing this idea, Kant actually developed several formulations of the Categorical Imperative, all of which turn on the idea of universalizability. Commentators usually list the following five versions:
“Act only according to a maxim that at the same time you could will that it should become a universal law.” In other words, a moral maxim is one that any rationally consistent human being would want to adopt and have others adopt it. The above-mentioned maxim of lying when doing so is to one’s advantage fails this test, since if there were a rule that everyone should lie under such circumstances no one would believe them – which of course is utterly incoherent. Such a maxim destroys the very point of lying.
“Act as if the maxim directing your action should be converted, by your will, into a universal law of nature.” The first version showed that immoral maxims are logically incoherent. The phrase “as if” in this second formulation shows that they are also untenable on empirical grounds. Quite simply, no one would ever want to live in a world that was by its very nature populated only by people living according to immoral maxims.
“Act in a way that treats all humanity, yourself and all others, always as an end, and never simply as a means.” The point here is that to be moral a maxim must be oriented toward the preservation, protection and safeguarding of all human beings, simply because they are beings which are intrinsically valuable, that is to say ends in themselves. Of course much cooperative activity involves “using” others in the weak sense of getting help from them, but moral cooperation always includes the recognition that those who help us are also persons like ourselves and not mere tools to be used to further our own ends.
“Act in a way that your will can regard itself at the same time as making universal law through its maxim.” This version is much like the first one, but it adds the important link between morality and personal autonomy: when we act morally we are actually making the moral law that we follow.
“Act as if by means of your maxims, you were always acting as universal legislator, in a possible kingdom of ends.” Finally, the maxim must be acceptable as a norm or law in a possible kingdom of ends. This formulation brings together the ideas of legislative rationality, universalizability, and autonomy. ”
You mean, “The aliens do not want to be exterminated, so the aliens would prefer that the maxim ‘exterminate X’, when universally quantified over all X, not be universally adhered to.”?
Well… so what? I assume the aliens don’t care about universalisable rules, since they’re in the process of exterminating humanity, and I see no reason to care about such either. What makes this more ‘objective’ than, say, sorting pebbles into correct heaps?
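For concreteness, here is a minimal sketch of the universalizability test being appealed to, with invented agents, preference entries, and a hypothetical wills_as_universal_law helper. It shows the sense in which the aliens’ maxim fails their own test, while leaving open the question above of why an agent that doesn’t care about universalisable rules should be moved by that.

```python
# A minimal sketch of the universalizability test as read above. The agents,
# the maxim, and the preference entries are invented for illustration only.

def exterminate(target):
    """The maxim under discussion, parameterized by its target."""
    return f"exterminate {target}"

# Invented preference data: what each party is unwilling to have done to it.
does_not_want = {"humans": "exterminate humans", "aliens": "exterminate aliens"}

def wills_as_universal_law(agent, maxim, targets):
    # Universalize the maxim: apply it to every target, the agent included.
    universalized = {maxim(t) for t in targets}
    # The agent can will the universalized maxim only if it contains nothing
    # the agent is unwilling to have done to itself.
    return does_not_want[agent] not in universalized

print(wills_as_universal_law("aliens", exterminate, ["humans", "aliens"]))  # False
```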