You’d need something like timeless decision theory here, and I feel like it is somehow cheating to bring in TDT/UDT when it comes to moral reasoning at the normative level… But I see what you mean. I am, however, not sure whether the view you defend here would, on its own terms, imply that humans have “rights”.
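To gesture at why plain causal reasoning seems insufficient here, consider a toy one-shot prisoner’s dilemma (entirely my own illustration; the payoffs and function names are made up, not anything from this thread): a causal egoist defects, while an agent that treats its counterpart as running the same decision procedure cooperates.

```python
# Toy sketch (illustrative payoffs only) of why contract-keeping between
# egoists may need TDT/UDT-style reasoning rather than causal reasoning.

# Payoffs to the row player: (my_move, their_move) -> my utility.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (contract kept)
    ("C", "D"): 0,  # I am exploited
    ("D", "C"): 5,  # I exploit
    ("D", "D"): 1,  # mutual defection (contract breaks down)
}

def causal_choice() -> str:
    # "D" strictly dominates "C": it yields more utility whatever the
    # counterpart does, so a purely causal egoist defects.
    if all(PAYOFF[("D", t)] > PAYOFF[("C", t)] for t in "CD"):
        return "D"
    return "C"

def twin_choice() -> str:
    # TDT/UDT-flavoured shortcut: if both parties run the same decision
    # procedure, my choice fixes theirs, so only the symmetric outcomes
    # (C,C) and (D,D) are attainable; cooperation then wins on utility.
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print(causal_choice())  # -> "D": the contract unravels without enforcement
print(twin_choice())    # -> "C": contract-keeping falls out of self-interest
```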
> It’s more specific than “do whatever is in your rational self-interest”, as it suggests something that someone who is following their self-interest should do.
There are two plausible cases I can see here:
1) The suggestion collides with “do whatever is in your rational self-interest”, in which case it was misleading.
2) The suggestion deductively follows from “do whatever is in your rational self-interest”, in which case it is uninteresting (and misleading, because it dresses itself up as some fancy further claim).
You seem to mean:
3) The suggestion adds something of interest to “do whatever is in your rational self-interest”; here I don’t see where this further claim would/could come from.
> it means that morality is based on contracts.
What do you mean by “morality”? Unless you rigorously define such controversial and differently used terms at every step, you’re likely to get caught up in equivocations.
Here are the two plausible interpretations I can come up with for “morality” in the partial sentence I quoted:
1) people’s desire to (sometimes) care about the interests of others, or their acting on that desire
2) people’s (system two) reasoning for why they end up doing nice/fair things to others
Both of these interpretations are descriptive claims. Using them would be like justifying deontology by citing the findings of trolleyology, which would beg the question of whether humans have “moral biases”, e.g. whether they are rationalising away inconsistencies in their positions, or defending positions they would not defend given more information and rationality.
In addition, even if the above sometimes applies, it would of course be overgeneralising to classify all of “morality” in these terms.
So likely you meant something else. There is a third plausible interpretation of your claim, namely something resembling what you wrote earlier:
> as it suggests something that someone who is following their self-interest should do.
Perhaps you are claiming that people are somehow irrational if they don’t do whatever is in their rational self-interest. However, this seems to be a very dubious claim. It would require the hidden premise that it is irrational to have something other than self-interest as your goal. Here, by self-interest I of course don’t mean the same thing as “utility function”! If you value the well-being of others just as much as your own well-being, you may act in ways that predictably make you worse off, and yet this would in some situations be rational conditional on an altruistic goal. I don’t think we can talk about rational/irrational goals; something can only be rational/irrational according to a stated goal.
(Or well, we could talk about it, but then we’d be using “rational” in a different way than I’m using it now, and also in a different way than is common on LW, and in such a case, I suspect we’d end up arguing whether a tree falling in a forest really makes a sound.)
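To make the point about goal-relative rationality concrete, here is a toy sketch (my own illustration; the numbers and names are invented): the same action maximises an altruist’s stated goal while predictably lowering her own welfare.

```python
# Toy illustration: the same action is rational relative to an
# altruistic utility function and irrational relative to a selfish one.
# All numbers are invented for the example.

# Hypothetical outcomes: action -> (own welfare, other's welfare).
OUTCOMES = {
    "donate": (-2.0, 10.0),       # predictably worse off myself
    "dont_donate": (0.0, 0.0),
}

def utility(own: float, other: float, altruism_weight: float) -> float:
    """Weighted sum of my welfare and the other person's welfare."""
    return own + altruism_weight * other

for weight, label in [(0.0, "pure egoist"), (1.0, "equal-weight altruist")]:
    best = max(OUTCOMES, key=lambda a: utility(*OUTCOMES[a], weight))
    print(f"{label}: rational choice is {best}")

# pure egoist: rational choice is dont_donate
# equal-weight altruist: rational choice is donate
# Donating makes the altruist worse off, yet maximises her stated goal,
# so "rational" only makes sense relative to that goal.
```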
> The suggestion adds something of interest to “do whatever is in your rational self-interest”; here I don’t see where this further claim would/could come from.
This spells out part of what “acting in your rational self-interest” means. To use an admittedly imperfect analogy, the connection between egoism and contractarianism is a bit like the connection between utilitarianism and giving to charity (conditional on it being effective). The former implies the latter, but it takes some thinking to determine what it actually entails. Also, not all egoists are contractarians, so it adds the claim that if you’ve decided to follow your rational self-interest, this is how you should act.
> What do you mean by “morality”?
What one should do. I realize that this may be an imprecise definition, but it gets at what utilitarians, Kantians, Divine Command Theorists, and ethical egoists have in common with each other that they don’t have in common with moral non-realists, such as nihilists. Of course, all the ethical theories disagree about the content of morality, but they agree that there is such a thing—it’s sort of like agreeing that the moon exists, even if they don’t agree what it’s made of. Morality is not synonymous with “caring about the interests of others”, nor does it even necessarily imply that (in the ethical-theory-neutral view I’m taking in this paragraph). Morality is what you should do, even if you think you should do something else.
As for your paragraph on whether goals can be rational or irrational (the one not in parentheses):
Being an ethical egoist, I do think that people are irrational if they don’t act in their self-interest. I agree that we can’t have irrational goals, but we aren’t free to set whatever goals we want—due to the nature of subjective experience and self-interest, rational self-interest is the only rational goal. What rational self-interest entails varies from person to person, but it’s still the only rational goal. I can go into it more, but I think it’s outside the scope of this thread.