Oh, this also suggests a way in which the utility-function abstraction is leaky: the reasons for the payoffs in a game may matter. E.g. if one agent's payoff is high because that agent is altruistic, then in some sense they are "already cooperating" in a way that is baked into the game, so the rational thing for them to do might differ from the rational thing for another agent who gets the same payoffs for "selfish" reasons.
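To make the leak concrete, here's a minimal sketch (my own illustration, not from the note above, using a made-up altruism weight alpha): start from a standard Prisoner's Dilemma in "selfish" payoffs, and let each agent's utility also count the other's selfish payoff at weight alpha. The matrix an outside observer is handed is just the transformed numbers, but what's rational changes with alpha:

```python
# Sketch: how an altruism weight alpha transforms a Prisoner's Dilemma.
# Each agent's utility is u_i = selfish_i + alpha * selfish_j, so the
# "altruistic" game has a different payoff matrix even though the
# underlying selfish situation is identical.

SELFISH = {  # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def altruistic_payoffs(alpha: float) -> dict:
    """Each agent also values the other's selfish payoff at weight alpha."""
    return {
        acts: (r + alpha * c, c + alpha * r)
        for acts, (r, c) in SELFISH.items()
    }

def row_best_response(payoffs: dict, col_action: str) -> str:
    """Row player's best reply against a fixed column action."""
    return max(["C", "D"], key=lambda a: payoffs[(a, col_action)][0])

for alpha in (0.0, 0.5, 1.0):
    game = altruistic_payoffs(alpha)
    replies = {col: row_best_response(game, col) for col in ("C", "D")}
    print(f"alpha={alpha}: best reply vs C/D -> {replies}")
    # alpha=0.0: D dominates (the usual dilemma)
    # alpha=0.5: best reply is to anti-coordinate (chicken-like game)
    # alpha=1.0: C dominates -- "cooperation" is baked into the payoffs
```

The catch is that the alpha=1 matrix, handed over on its own, could equally have come from a selfish agent who just happens to value mutual cooperation that highly; the payoff numbers alone can't distinguish the two, which is exactly the leak.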
Maybe FDT (functional decision theory) already lumps this effect into the "how correlated are our decisions" bucket? Idk.