What other principle can you use to draw this line between creatures who count and those who don’t?
Deals can be lopsided. If they have little to offer, they may get little in return.
This seems to provide an answer to the question you posed above.
Chickens have very little to offer me other than their tasty flesh, and essentially no capacity to meaningfully threaten me, which is why I don’t take their preferences into account. If you’re happy with lopsided deals, then that’s how you draw the line.
This seems like a perfectly reasonable position to take, but it doesn’t sound anything like utilitarianism to me.
Turns out, the best deals look a lot like maximizing weighted averages of the utilities of affected parties.
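To make that claim concrete, here is the kind of objective it suggests (the notation is my gloss on the claim, not anything stated in the thread): a good deal over outcomes x approximately solves

\[
x^{*} \;=\; \arg\max_{x \in X} \sum_{i} w_i \, u_i(x), \qquad w_i \ge 0,
\]

where u_i is party i’s utility function and the weight w_i reflects party i’s bargaining position, i.e. what it can offer or credibly threaten. A party with w_i close to zero barely moves the maximizer, which is the lopsided-deal case above.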
Well, the weighting is really the crux of the issue. If you are proposing that the weighting should reflect both what the affected parties can offer and what they can credibly threaten, then I still don’t think this sounds much like utilitarianism as usually defined. It sounds more like realpolitik / might-is-right.
I disagree. Certainly there are examples where the best deals do not look like maximizing weighted averages of the utilities of affected parties, and I gave one here. Are you aware of some argument that these kinds of situations are not likely in real life?
I also agree with mattnewport’s point, BTW.
Ok, I didn’t realize that you would weigh others’ preferences by how much they can offer you. My followup question is, you seem willing to give weight to other people’s preferences unilaterally, without requiring that they do the same for you, which is again more like altruism than cooperation. (For example you don’t want to ignore animals, but they can’t really reciprocate your attempt at cooperation.) Is that also a misunderstanding on my part?
Creatures get weight in a deal both because they have things to offer, and because others who have things to offer care about them.
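A minimal sketch of how those two sources of weight could combine, assuming (this is my illustration, not a rule stated in the thread) that a creature’s effective weight is its own bargaining weight plus the bargaining-weighted caring of others:

# Purely illustrative: a creature with nothing to offer can still get weight
# in a deal because parties with bargaining power care about it. All numbers
# and the composition rule are assumptions.

direct_weight = {        # what each party can offer or credibly threaten
    "human_1": 1.0,
    "human_2": 1.0,
    "chicken": 0.0,      # nothing to offer, nothing to threaten with
}

caring = {               # caring[j][i]: how much party j cares about party i
    "human_1": {"chicken": 0.0},
    "human_2": {"chicken": 0.2},   # suppose human_2 cares a bit about chickens
}

def effective_weight(i):
    """Own bargaining weight plus the bargaining-weighted caring of others."""
    inherited = sum(direct_weight[j] * caring.get(j, {}).get(i, 0.0)
                    for j in direct_weight if j != i)
    return direct_weight[i] + inherited

for name in direct_weight:
    print(name, effective_weight(name))
# chicken ends up with weight 0.2: not because it can bargain, but because
# someone who can bargain cares about it.

On these made-up numbers the chicken counts for something only via human_2’s caring, which is exactly the second clause of the comment above.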
But post-FAI, what does anyone except the FAI have to offer? Nothing to offer, and nothing to threaten with. The FAI decides all, does all, rules all. The question is, how should it rule? Since no creature besides the FAI has anything to offer, weighting is out of the equation, and every present, past, and potential creature’s utilities should count the same.
I think an FAI’s values would reflect the programmers’ values (unless it turns out there is Objective Morality or something else unexpected). My understanding now is that if Robin were the FAI’s programmer, the weights he would give to other people in its utility function would depend on how much they helped him create the FAI (and for people who didn’t help, how much the helpers care about them).
Sounds plenty selfish to me. Indeed, no different than might-is-right.
Instead of might-is-right, I’d summarize it as “might-and-the-ability-to-provide-services-to-others-in-exchange-for-what-you-want-is-right” and Robin would presumably emphasize the second part of that.
You can care a lot about other people no matter how much they help you, but you should give extra help to those who help you, for game-theoretic reasons. This doesn’t at all imply “selfishness”.
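A toy illustration of that game-theoretic point (the strategies and payoff numbers are my own assumptions, not anything from the thread): over repeated interactions, directing extra help toward those who help you back outperforms helping everyone equally, even if you care about everyone.

# Toy model: you repeatedly interact with one "helper" (returns your help)
# and one "free_rider" (never helps). Costs and benefits are made up.

HELP_COST = 1.0      # cost to you of helping someone in a round
HELP_BENEFIT = 3.0   # benefit to you of being helped in a round

def total_payoff(rounds, reciprocate):
    """Your payoff when you help conditionally (reciprocate=True) or always."""
    payoff = 0.0
    helped_me_last = {"helper": True, "free_rider": False}
    for _ in range(rounds):
        for other in ("helper", "free_rider"):
            i_help = helped_me_last[other] if reciprocate else True
            if i_help:
                payoff -= HELP_COST
            # The helper returns your help; the free rider never does.
            they_help = (other == "helper") and i_help
            if they_help:
                payoff += HELP_BENEFIT
            helped_me_last[other] = they_help
    return payoff

print("reciprocator:  ", total_payoff(100, reciprocate=True))   # 200.0
print("helps everyone:", total_payoff(100, reciprocate=False))  # 100.0

The unconditional helper does just as well with other helpers; the whole gap comes from subsidizing the free rider, which is why reciprocity here is compatible with caring about everyone.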