Even if winning were all that chess players wanted, it would still be incorrect for them to claim that playing poorly is the correct way for their opponent to play. Just as, when I’m hungry, I want to eat, but I don’t claim that strangers should feed me for free.
It’s incorrect because that’s not what the winning player would prefer. You don’t claim that strangers should feed you because that’s what you prefer; it’s part of your preferences. Some of your preferences can rely on satisfying someone else’s preferences, and such altruistic preferences are still your own preferences: helping members of your tribe whom you care about, cooperating within your tribe, enjoying the evolutionarily triggered endorphins.
You’re probably thinking that considering external preferences and incorporating them into your own utility function is a core principle of being “morally right”. Is that so?
So the core disagreement (I think) is this: take an agent with a given set of preferences. Some of these may include the preferences of others; some may not. On what basis should that agent modify its preferences to include more of others’ preferences, i.e. to be “more moral”?
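To make “incorporating external preferences into your own utility function” concrete, here is a minimal sketch in Python; the formalization, names, and numbers are my own assumptions, not anything stated in the discussion. The agent’s utility is its own payoff plus a weighted sum of others’ payoffs, and the weights are themselves part of the agent’s preferences, so the “altruistic” term is still the agent’s own preference. Nothing in the sketch answers the question above about the basis on which those weights should change.

```python
# Minimal sketch (my own formalization; names and payoff numbers are made up):
# agent A's utility = A's own payoff + a weighted sum of other agents' payoffs.
# The weights belong to A's preferences, so the altruistic term is still A's own.

from typing import Callable, Dict

Outcome = str
UtilityFn = Callable[[Outcome], float]

def make_utility(own: UtilityFn,
                 others: Dict[str, UtilityFn],
                 weights: Dict[str, float]) -> UtilityFn:
    """A's utility function with others' utilities folded in via A's own weights."""
    def utility(outcome: Outcome) -> float:
        return own(outcome) + sum(weights[name] * u(outcome)
                                  for name, u in others.items())
    return utility

# Illustrative numbers: A cares somewhat about B (weight 0.6) and not at all about C.
u_a = make_utility(
    own=lambda o: {"feast alone": 10.0, "share": 6.0}[o],
    others={"B": lambda o: {"feast alone": 0.0, "share": 8.0}[o],
            "C": lambda o: {"feast alone": 0.0, "share": 8.0}[o]},
    weights={"B": 0.6, "C": 0.0},
)

print(u_a("feast alone"), u_a("share"))  # 10.0 vs 10.8: with these weights A shares,
                                         # but the choice still follows A's own utility.
```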
Consider the prisoners’ dilemma, as analyzed traditionally. Each prisoner wants the other to cooperate, but neither can claim that the other should cooperate.
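For reference, here is that traditional analysis sketched out; the specific payoff numbers below are illustrative assumptions, and only their ordering matters. Defection is each prisoner’s dominant strategy, yet each prisoner is better off if the other cooperates, which is exactly the “I want you to cooperate, but I can’t say you should” situation.

```python
# Sketch of the traditional prisoners' dilemma analysis.
# Payoff numbers are illustrative; only the ordering matters
# (temptation > mutual cooperation > mutual defection > sucker's payoff).

PAYOFFS = {
    # (my_move, their_move): my_payoff
    ("C", "C"): 3,  # we both cooperate
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # we both defect
}

def best_response(their_move: str) -> str:
    """My payoff-maximizing move, given the other prisoner's move."""
    return max(("C", "D"), key=lambda my_move: PAYOFFS[(my_move, their_move)])

# Defection dominates: it is my best response whatever the other prisoner does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet whatever I do, I get more when the other prisoner cooperates.
assert PAYOFFS[("D", "C")] > PAYOFFS[("D", "D")]
assert PAYOFFS[("C", "C")] > PAYOFFS[("C", "D")]
```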
So you can imagine yourself in someone else’s position, then say “What B should do from A’s perspective” is different from “What B should do from B’s perspective”. Then you can enter all sorts of game theoretic considerations. Where does morality come in?
There is no “What B should do from A’s perspective”; from A’s perspective there is only “What I want B to do”. It’s not a “should”. Similarly, the chess player wants his opponent to lose, and I want people to feed me, but neither of those is a “should”. “Should”s exist only from an agent’s own perspective applied to themselves, or from something simulating that perspective (such as modeling the other player in a game). “What B should do from B’s perspective” is equivalent to “What B should do”.