I’m starting to doubt I know what ‘sentience’ means, if it means what you’re using it to mean. Your argument doesn’t really work given my definition of sentience, since I don’t equate sentience and subjective experience.
I don’t see why sentience is automatically given such important status in moral reasoning. I’d be more tempted to heavily weight things like consciousness/sapience. Are there good non-obvious arguments for privileging sentience somewhere? Feeling pain and pleasure is a big part of what we consider good and bad, but there are many valuable things that fall outside the scope of ‘sentience’ in my current worldview.
Something like what you’re arguing seems correct, but I think the point is obvious-seeming to good reductionists and we should come up with better arguments for this obvious-seeming thing.
Here is my modest suggestion: naturalistic ethics. A rational agent has moral significance if it (or its coalition) engages in Nash bargaining with you (or your coalition). That is, you should be nice to it only if it rewards its benefactors, punishes its malefactors, and gives strangers the benefit of the doubt. The amount of good you do for your coalition members ought to balance (at the margin) the good they do for you. Your coalition works best on the basis of complete honesty. (A toy sketch of this reciprocity rule appears at the end of this comment.)
There are no moral principles beyond (long-term) rational self-interest.
One nice thing about this approach to morality is that it is sufficiently well defined to allow you to prove things. The other nice thing is that it does not require additional arguments and principles to:
explain why you ought to act morally
explain how to balance morality with self-interest
Somehow, I find it hard to imagine a principle or argument which would explain why I ought to be particularly nice to folks with lots of sentience—people who can receive more qualia per second than ordinary folks. However, being particularly nice to people who have the power to help or harm me and my friends—well, that is just common sense.
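To make the reciprocity rule above concrete, here is a minimal sketch of it as a strategy in a repeated two-player game: cooperate with strangers (the benefit of the doubt), then reward or punish in kind. The payoff numbers and the always_defect opponent are my own illustrative assumptions, not part of the argument above.

```python
# Toy model: the reciprocity rule as a strategy in a repeated two-player game.
# "Give strangers the benefit of the doubt" -> cooperate on first contact.
# "Reward benefactors, punish malefactors" -> mirror the partner's last move.
# Payoff values below are arbitrary illustration, not part of the argument.

COOPERATE, DEFECT = "C", "D"

# Prisoner's-dilemma-style payoffs (row player's payoff).
PAYOFF = {
    (COOPERATE, COOPERATE): 3,
    (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 5,
    (DEFECT, DEFECT): 1,
}

def reciprocator(history):
    """Cooperate with strangers; thereafter reward or punish in kind."""
    if not history:                  # stranger: benefit of the doubt
        return COOPERATE
    return history[-1]               # mirror the partner's last move

def always_defect(history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Run a repeated game and return each player's total payoff."""
    hist_a, hist_b = [], []          # what each player has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    print(play(reciprocator, reciprocator))   # mutual cooperation: (30, 30)
    print(play(reciprocator, always_defect))  # defector held to 14 vs 9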
How do you choose whose self-interest to act in?
If two people are acting rationally and they have the same information, they should come to the same conclusion. So it would seem reasonable that they’d both pick the same self-interest to act in.
I assume you are joking. But, to respond seriously:
No two people have the same information. I knew I had a stomach ache before I shared that information with you.
Even if they had the same information, that wouldn’t necessarily mean they had the same degrees of belief, because they may have different priors. Many of my priors have a genetic basis (fear of snakes, for example), and you and I have different genes.
Even if they have the same beliefs, they don’t necessarily have the same preferences.
And even if they have the same preferences and the same inclination to act based on self-interest, they have different selves. If they have one stick of bubble-gum between them, and flip a coin to see who gets it, only one of them ends up with the disutility of gum on his face.
They don’t start with the same priors or preferences, but unless each has a good reason to trust their own more than the other person’s, they’d use both.
For example, imagine you’re on a treasure hunt. Everyone has a different, imperfect map. This could result in each of you looking in different places, but if you stop and think about it (and it’s not a competition), you’ll just share maps. The fact that one map happened to be in your hands doesn’t make it any better.
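To put a number on “use both,” here is a minimal sketch (my own illustration, with an assumed noise level and trial count): two independent, equally noisy maps of the treasure’s location, where averaging the two beats trusting either one alone.

```python
# Toy illustration (not from the original comment): two people each hold a
# noisy, independent estimate of where the treasure is.  Averaging the two
# "maps" has lower expected error than trusting either map alone, which is
# the sense in which the map that happens to be in your hands is no better.
# The noise level and trial count are arbitrary assumptions.

import random

TRUE_LOCATION = 10.0
NOISE = 2.0          # standard deviation of each map's error (assumed)
TRIALS = 100_000

def squared_error(estimate):
    return (estimate - TRUE_LOCATION) ** 2

own_map_error = 0.0
shared_map_error = 0.0
for _ in range(TRIALS):
    my_map = random.gauss(TRUE_LOCATION, NOISE)      # my imperfect map
    your_map = random.gauss(TRUE_LOCATION, NOISE)    # your imperfect map
    own_map_error += squared_error(my_map)
    shared_map_error += squared_error((my_map + your_map) / 2)

print("mean squared error, my map only:", own_map_error / TRIALS)     # ~4.0
print("mean squared error, both maps:  ", shared_map_error / TRIALS)  # ~2.0
```

With independent errors of equal variance, averaging halves the mean squared error, so the estimate built from both maps is strictly better in expectation than either map on its own.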