Don’t forget that we don’t have access to each other’s utility functions. I’ve started to think of “X, Y” as an inferior way to express A’s and B’s utility; it’s better to treat them as incomparable dimensions, “X + Yi”. Your description of altruistic players is therefore incorrect: each player has a function by which they value the other’s (perceived) reward. Instead of a selfish 2,2 becoming an altruistic 4,4, it’s a selfish 2+2i becoming an altruistic 2+Afn(2i) + 2i+Bfn(2), where Afn and Bfn each convert the other player’s payoff into one’s own dimension. One problem with this is that MANY (perhaps all) humans use poor functions for this altruistic conversion. It’s easy to mix up “what rewards B” with “what I imagine I’d want if I were B” or “what B should want”.
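To make that concrete, here’s a minimal Python sketch of the transform; afn, bfn, and the 0.5 weights are placeholders of my own, not anything from the post:

```python
def afn(b_reward: float) -> float:
    # How much A values B's (perceived) reward, expressed in A's own units.
    # The 0.5 weight is an arbitrary placeholder.
    return 0.5 * b_reward

def bfn(a_reward: float) -> float:
    # How much B values A's (perceived) reward, expressed in B's own units.
    return 0.5 * a_reward

def altruistic(selfish: complex) -> complex:
    # A's payoff lives on the real axis, B's on the imaginary axis,
    # so the two dimensions stay incomparable.
    a, b = selfish.real, selfish.imag
    # Each player keeps their own payoff and adds a converted copy of the other's.
    return complex(a + afn(b), b + bfn(a))

print(altruistic(2 + 2j))  # (3+3j): selfish 2+2i becomes 2+Afn(2i) + 2i+Bfn(2)

# The failure mode: A plugs in "what I imagine I'd want if I were B"
# rather than what actually rewards B, so the conversion points the wrong way.
b_actual, b_as_imagined_by_a = 2.0, -1.0
print(complex(2 + afn(b_as_imagined_by_a), b_actual))  # (1.5+2j): A "helps" based on a bad model
```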
I further think you’re mixing up utility with resources (or with components of the world-state that affect utility but aren’t actually utility). The 11+10i meat/no-meat result is utility ONLY if it includes all of the other crap that goes on in our minds when evaluating world-states, including “do they really love me?”, “why don’t I have power in this relationship?”, and so on. If it’s just a component, then there’s no reason to believe utility is linear in it, nor that it’s independent of all the other things.
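A toy illustration of that last point (the function shape and numbers are invented purely for the example): a resource-like component can enter utility with diminishing returns, and with a weight that depends on everything else going on.

```python
import math

def utility(meat_component: float, feels_loved: bool) -> float:
    # Diminishing returns on the resource-like component (so not linear)...
    base = math.log1p(max(meat_component, 0.0))
    # ...with a weight that depends on the rest of the world-state (so not independent).
    weight = 1.0 if feels_loved else 0.3
    return weight * base

# The same "+10 meat" outcome lands very differently depending on the other stuff:
print(utility(10.0, feels_loved=True))   # ~2.40
print(utility(10.0, feels_loved=False))  # ~0.72
```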
I do like some of the exploration of convention, and the recognition that it’s not universal. I don’t think simplistic game theory adds much to that theory, though. And I suspect you’re massively underestimating the cost of creating, understanding, and negotiating within those conventions.
Your description of altruistic players is therefore incorrect
Yeah, it’s a very simplified version of altruism. I understand that real altruism in the wild does not look like adding another’s utility function to one’s own (which one somehow has read access to).
The 11+10i meat/no-meat result is utility ONLY if it includes all of the other crap that goes on in our minds to evaluate world-states
No, I’m saying: assume that the payoff matrix I’ve written down is correct for the situation in which some outside entity chooses each player’s actions for them. This whole post is basically about how the “other crap” changes the payoff matrix when the players have to take responsibility for picking the actions themselves.
If it’s just a component, then there’s no reason to believe it’s linear, nor that it’s independent of all the other things.
I know what linearity and statistical independence are, but I don’t know what you mean by this.