Your post seems to point out that one can consider mixed coordinated strategies on the global game (where in the first round you are told which game you play, and in the second round you play it). The set of payoffs thus obtained is the convex closure of the pure-strategy payoffs; in particular, payoffs on the Pareto frontier of the global game are representable as linear (convex) combinations of payoffs on the Pareto frontiers of the individual games, and in an even more special case, this point applies to any notion of a “fair” solution.
The philosophical point seems to be the same as in Counterfactual Mugging: you might want to always follow the strategy you’d (want to) choose before obtaining the knowledge you now possess (with that strategy itself being conditional, and used by passing the knowledge you now possess as a parameter), in this case applied to knowledge about which game is being played. In other words, try to respect reflective consistency even if “it’s already too late”.
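A minimal sketch of the convex-closure point, in Python; the two games, the probability p, and the pareto_frontier helper are all made up for illustration, not anything from the post:

```python
import itertools

game1 = [(0, 0), (3, 1), (1, 3)]   # pure-outcome payoffs (u1, u2) in game 1
game2 = [(0, 0), (2, 2), (4, 0)]   # pure-outcome payoffs (u1, u2) in game 2
p = 0.5                            # probability that game 1 is the one played

# A coordinated strategy picks one pure outcome per game; its expected payoff
# in the global game is the convex combination of the two chosen payoffs.
global_payoffs = sorted(set(
    (p * x1 + (1 - p) * y1, p * x2 + (1 - p) * y2)
    for (x1, x2), (y1, y2) in itertools.product(game1, game2)
))

def pareto_frontier(points):
    """Keep points not dominated (weakly in both coordinates, strictly in one)."""
    return [a for a in points
            if not any(b[0] >= a[0] and b[1] >= a[1] and b != a for b in points)]

# Every Pareto-optimal payoff of the global game mixes a Pareto-optimal
# payoff of game 1 with a Pareto-optimal payoff of game 2.
print(pareto_frontier(global_payoffs))
```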
P.S.
In general the μ is not a real number, but a linear isomorphism between the two utilities, invariantly defined by some process.
“Isomorphism” (and “between”) seems like a very wrong word to use here. Linear combination of two utilities, perhaps.
I suspect you misunderstand. The two isomorphic utilities (i.e. utility functions) are U2 and μU2. You seem to be referring to the linear combination of U1 and U2.
Yes, that’s what Perplexed noticed. What seems interesting is that getting a Pareto-optimal result in the global game forces both players to follow Counterfactual Mugging-style reasoning.
I’ve added an addendum to the post, laying out what μ actually is.
Though the whole addendum could be summarised as: yes, μ is pretty much what you’d expect. :-)
I feel like μ isn’t the really important part… it’s more like μ = A × B, where A encodes the translation from “one util” in U1 to “one util” in U2, and B encodes the relative amounts the agents matter in the deal. It seems like A is the bit that remains fairly constant across deals and short periods of time, while B can be bargained anew each time, relative to the specific deal in question.
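A toy sketch of this μ = A × B reading, with all numbers (and the particular split into A and B) invented for illustration:

```python
outcomes = [(0.0, 0.0), (3.0, 1.0), (1.0, 3.0), (2.0, 2.5)]  # (U1, U2) pairs

A = 2.0   # hypothetical rate: one util of U2 is worth two utils on U1's scale
B = 0.75  # hypothetical bargaining weight granted to agent 2 in this deal
mu = A * B

# The joint agent maximises U1 + mu*U2; different mu pick out different
# Pareto-optimal outcomes.
best = max(outcomes, key=lambda o: o[0] + mu * o[1])
print(mu, best)
```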
If B differs over time, you’ll have outcomes that are not Pareto optimal in total (see the sketch below). Idealised utility-maximising agents should establish μ once and for all at the beginning; each change in μ is paid for in decreased utility.
I’m not claiming that human agents do, or should, behave this way.
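A toy numeric check of the point about B, in Python; the deal payoffs and the particular μ values are made up for illustration. Renegotiating μ each deal can leave the summed outcome Pareto-dominated by what one fixed μ achieves:

```python
deal = [(3.0, 0.0), (2.0, 2.0), (0.0, 3.0)]  # same (U1, U2) options in each deal

def choose(outcomes, mu):
    """Outcome a joint U1 + mu*U2 maximiser picks."""
    return max(outcomes, key=lambda o: o[0] + mu * o[1])

# mu renegotiated per deal: favours agent 1 in deal 1, agent 2 in deal 2.
a = choose(deal, 0.25)   # -> (3, 0)
b = choose(deal, 4.0)    # -> (0, 3)
varying_total = (a[0] + b[0], a[1] + b[1])   # (3, 3)

# mu fixed once and for all.
c = choose(deal, 1.0)    # -> (2, 2)
fixed_total = (2 * c[0], 2 * c[1])           # (4, 4), which dominates (3, 3)

print(varying_total, fixed_total)
```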