Good introduction; nothing new (though the “line-item veto” example is a nice one), but still good to have around.
One thing that made me “wail and gnash” a little:
that everyone is purely self-interested
That’s not really the case. Game theory usually considers that everyone is a utility maximizer, but nothing says that the utility function has to be selfish. A utility function can factor in the well-being and happiness of others.
You can apply game theory to cases like a parent-child relationship, in which the parent and the child disagree, but the parent is still motivated by the child’s interests. Even in more classical cases, nothing forces the utility function to be selfish and to ignore the other’s well-being. Game theory only applies when the agents have different goals, but those goals can just be “I value my own well-being twice as much as the well-being of the other”, which is not “purely self-interested”.
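To make that concrete, here is a minimal sketch (the payoff numbers and the “I weight your payoff at half of mine” factor are my own illustration, not from the post): a prisoner’s dilemma whose only equilibrium is mutual defection for purely selfish agents, but whose only equilibrium becomes mutual cooperation once each agent adds half of the other’s payoff to their own utility.

```python
from itertools import product

# Prisoner's dilemma payoffs (row player, column player); 0 = cooperate, 1 = defect.
# Numbers chosen for illustration: temptation 5 > reward 4 > punishment 1 > sucker 0.
SELFISH = {
    (0, 0): (4, 4),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def other_regarding(payoffs, weight):
    """Each agent's utility = own payoff + weight * other's payoff."""
    return {acts: (mine + weight * yours, yours + weight * mine)
            for acts, (mine, yours) in payoffs.items()}

def pure_nash(payoffs):
    """Enumerate pure-strategy Nash equilibria of a 2x2 game."""
    return [(a, b) for a, b in product((0, 1), repeat=2)
            if all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in (0, 1))
            and all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in (0, 1))]

print(pure_nash(SELFISH))                        # [(1, 1)] — both defect
print(pure_nash(other_regarding(SELFISH, 0.5)))  # [(0, 0)] — both cooperate
```

The agents are still utility maximizers and all the usual machinery still applies; only the utility function changed.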
It makes me “wail and gnash” because it’s a very frequent cliché that rationalists and utility maximizers are necessarily selfish and don’t care about others, and it’s a cliché we should fight. That said, I understand that a whole part of game theory is about showing how even pure selfishness can, in some cases, lead to cooperation being the best solution. Your “line-item veto” is a good example of it: Clinton and Congress can still cooperate to get back to the previous 5-5 equilibrium if they trust each other, and this is an iterated prisoner’s dilemma in the end.
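And for the pure-selfishness route to cooperation, a rough sketch of the standard iterated-PD argument (payoff numbers and discount factors are made up for illustration): against a “grim trigger” opponent, the one-round gain from defecting is outweighed by losing the cooperation payoff in every later round, once future rounds matter enough.

```python
# T, R, P, S follow the usual prisoner's-dilemma convention:
# temptation, reward, punishment, sucker.  Illustrative numbers only.
T, R, P, S = 5, 3, 1, 0

def cooperate_forever(delta, horizon=1000):
    """Discounted payoff from mutual cooperation every round."""
    return sum(R * delta**t for t in range(horizon))

def defect_once_then_punished(delta, horizon=1000):
    """Defect in round 0 against grim trigger, then get punished forever after."""
    return T + sum(P * delta**t for t in range(1, horizon))

for delta in (0.2, 0.6, 0.9):
    coop, defect = cooperate_forever(delta), defect_once_then_punished(delta)
    print(delta, "cooperate" if coop >= defect else "defect",
          round(coop, 1), round(defect, 1))
```

With these numbers the switch happens at delta = (T − R)/(T − P) = 0.5: below that, defecting pays; above it, a purely selfish agent prefers to keep cooperating.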
Game theory only applies when the agents have different goals
That is not quite true. It can also apply when they have identical goals but different information, for example, the Meet in New York game that is discussed in the next post. They should still end up at a Nash Equilibrium, and depending on the specifics of a cooperative game, backwards induction may be applicable.
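A tiny sketch of that point (the place names and the 0/1 payoffs are just stand-ins, not from the post): in a pure coordination game the two players’ payoffs are identical, yet every “both pick the same place” profile is a Nash equilibrium, so the interesting problem is which equilibrium they manage to coordinate on.

```python
# Pure coordination game in the spirit of "Meet in New York": identical payoffs,
# several equilibria, and the whole difficulty is coordinating on one of them.
PLACES = ["Grand Central", "Empire State Building", "Times Square"]

def payoff(mine, yours):
    return 1 if mine == yours else 0  # both win only if they pick the same place

equilibria = [(a, b) for a in PLACES for b in PLACES
              if all(payoff(a, b) >= payoff(a2, b) for a2 in PLACES)    # row can't do better
              and all(payoff(a, b) >= payoff(a, b2) for b2 in PLACES)]  # column can't do better
print(equilibria)  # the three "same place" profiles — one Nash equilibrium per place
```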
See my response to steven0461 and my footnote. Yes, we will eventually be able to derive cooperation, but we will derive it by starting with selfish assumptions.
I don’t think the math models motivation anyway. It’s abstracted away, leaving each agent maximising a utility function. Nor is utility in the model (which is well defined) isomorphic to utility for a person making decisions in the real world (which is not). But our minds seem to learn things better when they are couched in terms of a story about people.
Hmm. Possibly one danger in this is assuming that your own internal story about what the equations mean is what they actually mean, such that you end up overconfident that the results of a decision in the real world will be like the story in your head.