Here’s an argument for something that might be called speciesism, though it isn’t strictly speciesism, because moral consideration could be extended to hypothetical non-human beings (though no currently known ones) and not quite to all humans—contractarianism. We have reason to restrict ourselves in our dealings with a being when it fulfills three criteria: it can harm us, it can choose not to harm us, and it can agree not to harm us in exchange for us not harming it. When these criteria are fulfilled, a being has rights and should not be harmed; otherwise, we have no reason to restrict ourselves in our dealings with it.
Indeed, consistently applied, this view would deny rights to both non-human animals and some human individuals, so it wouldn’t be speciesist. There is, however, another problem with contractarianism: I think the way it is usually presented is blatantly not thought through and rests on a non sequitur.
We have reason to restrict ourselves in our dealings with a being when it fulfills three criteria: it can harm us, it can choose not to harm us, and it can agree not to harm us in exchange for us not harming it.
What do you mean by “we have reason”?
If you mean that it would be in our rational self-interest to grant rights to all such beings, then that does not follow. Just because a being could reciprocate doesn’t mean it will, so granting rights to all such beings might well, in some empirical circumstances, go against your rational self-interest. So there seems to be a (crucial!) step missing here. And if all one is arguing for is to “do whatever is in your rational self-interest”, why give it a misleading name like contractarianism?
There is always the option to say “I don’t care about others”. Apart from the ingenious argument about personal identity which implies that your own future selves should also count among “others”, there is not much one can say to such a person. Such a person would refuse to go along with the outcome specified by the axiom of impartiality/altruism in the ethics game. You may play the ethics game intellectually and come to the conclusion that systematized altruism implies some variety of utilitarianism (and then define more terms and hash out details), but you can still choose to implement another utility function in your own actions. The two dimensions are separate, I think.
If you mean that it would be in our rational self-interest to grant rights to all such beings, then that does not follow. Just because a being could reciprocate doesn’t mean it will, so granting rights to all such beings might well, in some empirical circumstances, go against your rational self-interest.
True, but it would be in their rational self-interest to retaliate if their rights aren’t being respected, to create a credible threat so their rights would be respected.
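To make the reciprocity point concrete: this is essentially the standard iterated-game story, where a retaliatory strategy makes harming you costly, so respecting your interests becomes the other party’s self-interested choice. A minimal sketch in Python, with payoff numbers and strategy names that are purely illustrative (nothing here is taken from the original argument):

```python
# Illustrative payoffs only: (my payoff, their payoff) for one round of interaction.
# Standard prisoner's-dilemma-style numbers, chosen for the example.
PAYOFFS = {
    ("respect", "respect"): (3, 3),
    ("respect", "harm"):    (0, 5),
    ("harm",    "respect"): (5, 0),
    ("harm",    "harm"):    (1, 1),
}

def retaliator(history):
    """Respect the other party unless they harmed us in the previous round."""
    if not history:
        return "respect"
    return "respect" if history[-1][1] == "respect" else "harm"

def exploiter(history):
    """Always harm, regardless of what the other party does."""
    return "harm"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []  # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print(play(exploiter, retaliator))   # (14, 9): one successful exploit, then mutual harm
    print(play(retaliator, retaliator))  # (30, 30): mutual respect throughout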
if all one is arguing for is to “do whatever is in your rational self-interest”, why give it a misleading name like contractarianism?
It’s not a misleading name; it means that morality is based on contracts. It’s more specific than “do whatever is in your rational self-interest”, as it suggests something that someone who is following their self-interest should do. Also, not everyone who advocates following one’s rational self-interest is a contractarian.
You’d need something like timeless decision theory here, and I feel like it is somehow cheating to bring in TDT/UDT when it comes to moral reasoning at the normative level… But I see what you mean. I am however not sure whether the view you defend here would on its own terms imply that humans have “rights”.
It’s more specific than “do whatever is in your rational self-interest”, as it suggests something that someone who is following their self-interest should do.
There are two plausible cases I can see here:
1) The suggestion collides with “do whatever is in your rational self-interest”; in which case it was misleading.
2) The suggestion deductively follows from “do whatever is in your rational self-interest”; in which case it is uninteresting (and misleading because it dresses itself up as some fancy claim).
You seem to mean:
3) The suggestion adds something of interest to “do whatever is in your rational self-interest”; here I don’t see where this further claim would/could come from.
it means that morality is based on contracts.
What do you mean by “morality”? Unless you rigorously define such controversial and differently used terms at every step, you’re likely to get caught up in equivocations.
Here are the two plausible interpretations of “morality” in the partial sentence I quoted that I can come up with:
1) people’s desire to (sometimes) care about the interests of others / them following that desire
2) people’s (system two) reasoning for why they end up doing nice/fair things to others
Both these claims are descriptive. Basing a normative view on them would be like justifying deontology by citing the findings from trolleyology, which would beg the question of whether humans have “moral biases”, e.g. whether they are rationalising over inconsistencies in their positions, or defending positions they would not defend given more information and rationality.
In addition, even if the above sometimes applies, it would of course be overgeneralising to classify all of “morality” according to the above.
So likely you meant something else. There is a third plausible interpretation of your claim, namely something resembling what you wrote earlier:
as it suggests something that someone who is following their self-interest should do.
Perhaps you are claiming that people are somehow irrational if they don’t do whatever is in their best self-interest. However, this seems to be a very dubious claim. It would require the hidden premise that it is irrational to have something other than self-interest as your goal. Here, by self-interest I of course don’t mean the same thing as “utility function”! If you value the well-being of others just as much as your own well-being, you may act in ways that predictably make you worse off, and yet this would in some situations be rational conditional on an altruistic goal. I don’t think we can talk about rational/irrational goals; something can only be rational/irrational according to a stated goal.
(Or well, we could talk about it, but then we’d be using “rational” in a different way than I’m using it now, and also in a different way than is common on LW, and in such a case, I suspect we’d end up arguing about whether a tree falling in a forest really makes a sound.)
The suggestion adds something of interest to “do whatever is in your rational self-interest”; here I don’t see where this further claim would/could come from.
This spells out part of what “acting in your rational self-interest” means. To use an admittedly imperfect analogy, the connection between egoism and contractarianism is a bit like the connection between utilitarianism and giving to charity (conditional on it being effective): the former implies the latter, but it takes some thinking to determine what it actually entails. Also, not all egoists are contractarians; contractarianism adds the claim that if you’ve decided to follow your rational self-interest, this is how you should act.
What do you mean by “morality”?
What one should do. I realize that this may be an imprecise definition, but it gets at what utilitarians, Kantians, Divine Command Theorists, and ethical egoists have in common with each other that they don’t have in common with moral non-realists, such as nihilists. Of course, all the ethical theories disagree about the content of morality, but they agree that there is such a thing—it’s sort of like agreeing that the moon exists, even if they don’t agree what it’s made of. Morality is not synonymous with “caring about the interests of others”, nor does it even necessarily imply that (in the ethical-theory-neutral view I’m taking in this paragraph). Morality is what you should do, even if you think you should do something else.
As for your second-to-last paragraph (the one not in parentheses) -
Being an ethical egoist, I do think that people are irrational if they don’t act in their self-interest. I agree that we can’t have irrational goals, but we aren’t free to set whatever goals we want—due to the nature of subjective experience and self-interest, rational self-interest is the only rational goal. What rational self-interest entails varies from person to person, but it’s still the only rational goal. I can go into it more, but I think it’s outside the scope of this thread.