This isn’t about the agents having selfish desires (in fact, they don’t even have to “not care at all about other entities”: altruism determines what the utility function is, not how to maximise it).
This is wrong. The standard assumption is that game-theory-rational entities are neither altruistic nor malevolent. Otherwise the Prisoner’s Dilemma wouldn’t be a dilemma in game theory. It’s only a dilemma as long as both players are solely interested in their own outcomes. As soon as you allow players to have altruistic interests in other players’ outcomes it ceases to be a dilemma.
You can do similar mathematical analyses with altruistic agents, but at that point, strictly speaking, you’re doing decision-theoretic calculations, or possibly utilitarian calculations, not game-theoretic calculations.
Utilitarian ethics, game theory and decision theory are three different things, and it seems to me your criticism assumes that statements about game theory should be taken as statements about utilitarian ethics or statements about decision theory. I think that is an instance of the fallacy of composition and we’re better served to stay very aware of the distinctions between those three frameworks.
> This is wrong. The standard assumption is that game-theory-rational entities are neither altruistic nor malevolent.
No, it just isn’t. Game theory is completely agnostic about what the preferences of the players are based on. Game theory takes a payoff matrix and calculates things like Nash equilibria and dominant strategies. The verbal description of why the payoff matrix happens to look the way it does is fluff.
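To make that concrete, here is a minimal Python sketch of what a game-theoretic analysis actually consumes and produces: a payoff table per player in, dominant strategies and equilibria out. The payoff numbers are the usual illustrative Prisoner’s Dilemma values, and the helper names are mine, not from any particular library.

```python
from itertools import product

ACTIONS = ("cooperate", "defect")

# Row player's payoffs, indexed by (row_action, col_action).
# Numbers are the usual illustrative PD values (negated years in prison).
ROW = {("cooperate", "cooperate"): -1, ("cooperate", "defect"): -10,
       ("defect", "cooperate"): 0, ("defect", "defect"): -5}
# The game is symmetric, so the column player's payoffs mirror the row's.
COL = {(r, c): ROW[(c, r)] for r, c in product(ACTIONS, ACTIONS)}

def dominant_strategy(own):
    """A row action at least as good as every alternative, whatever the column does."""
    for a in ACTIONS:
        if all(own[(a, b)] >= own[(a2, b)] for b in ACTIONS for a2 in ACTIONS):
            return a
    return None

def pure_nash():
    """All pure-strategy profiles from which neither player gains by deviating."""
    return [(r, c) for r, c in product(ACTIONS, ACTIONS)
            if all(ROW[(r, c)] >= ROW[(r2, c)] for r2 in ACTIONS)
            and all(COL[(r, c)] >= COL[(r, c2)] for c2 in ACTIONS)]

print(dominant_strategy(ROW))  # defect
print(pure_nash())             # [('defect', 'defect')]
```

Notice that nothing in this computation asks why the payoffs are what they are; the numbers are the whole input.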
> Otherwise the Prisoner’s Dilemma wouldn’t be a dilemma in game theory. It’s only a dilemma as long as both players are solely interested in their own outcomes. As soon as you allow players to have altruistic interests in other players’ outcomes it ceases to be a dilemma.
As soon as you allow altruistic interests over and above the stated payoffs, yes, the game ceases to be a Prisoner’s Dilemma. But the dilemma (and game theory in general) relies on the players being perfectly selfish in the sense that they ruthlessly maximise their own payoffs as defined, not in the sense that those payoffs must never refer to aspects of the universe that happen to include the physical state of the other agents.
Consider the Codependent Prisoner’s Dilemma. Romeo and Juliet have been captured and the guards are trying to extort confessions out of them. However, Romeo and Juliet are both lovesick and infatuated and care only about what happens to their lover, not what happens to themselves. Naturally the guards offer Romeo the deal “If you confess we’ll let Juliet go and you’ll get 10 years, but if you don’t confess you’ll both get 1 year” (and vice versa, with a both-confess clause in there somewhere). Game theory is perfectly equipped to handle this game. In fact, so much so that it wouldn’t even bother calling it by a new name. It’s just a Prisoner’s Dilemma, and the fact that the conflict of interests between Romeo and Juliet happens to be based on codependent altruism rather than narcissism is outside the scope of what game theorists care about.
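To check that this really is the same game, here is a short Python sketch. The one-sided sentence lengths come from the deal as stated; the 5-year figure for the both-confess clause is my own illustrative assumption. Each lover’s utility is simply minus the partner’s sentence, and the familiar structure falls out: confessing dominates, yet mutual confession leaves both worse off than mutual silence.

```python
ACTIONS = ("silent", "confess")

def sentences(romeo, juliet):
    """Years in prison as (romeo_years, juliet_years) for each action pair."""
    if romeo == "silent" and juliet == "silent":
        return 1, 1
    if romeo == "confess" and juliet == "silent":
        return 10, 0   # Romeo's confession frees Juliet, costs him 10 years
    if romeo == "silent" and juliet == "confess":
        return 0, 10   # and vice versa
    return 5, 5        # both confess: the assumed clause

def utilities(romeo, juliet):
    """Codependent preferences: each lover cares only about the OTHER's sentence."""
    r_years, j_years = sentences(romeo, juliet)
    return -j_years, -r_years

# Confessing dominates for Romeo, whatever Juliet does...
for j in ACTIONS:
    assert utilities("confess", j)[0] > utilities("silent", j)[0]

# ...yet mutual confession is worse for both than mutual silence.
print(utilities("confess", "confess"))  # (-5, -5)
print(utilities("silent", "silent"))    # (-1, -1)
```

The machinery never asks whether the numbers encode self-interest or devotion; the matrix alone carries the dilemma.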
It seems from my perspective that we are talking past each other and that your responses are no longer tracking the original point. I don’t personally think that deserves upvotes, but others obviously differ.
Your original claim was that:
> Said literature gives advice, reasoning and conclusions that is epistemically, instrumentally and normatively bad.
Now, given that game theory is not making any normative claims, it can’t be saying things which are normatively bad. Similarly, since game theory does not say either that you should go out and act like a game-theory-rational agent or that you should act as if others will do so, it can’t be saying anything instrumentally bad either.
I just don’t see how it could even be possible for game theory to do what you claim it does. That would be like stating that a document describing the rules of poker was instrumentally and normatively bad because it encouraged wasteful, zero-sum gaming. It would be mistaking description for prescription.
We have already agreed, I think, that there is nothing epistemically bad about game theory taken as it is.
Everything below responds to the off-track discussion above and can be safely ignored by posters not specifically interested in that digression.
In game theory each player’s payoff matrix is their own. Notice that Codependent Romeo does not care where Codependent Juliet ends up in her payoff matrix. If Codependent Romeo were altruistic in the sense of wanting to maximise Juliet’s satisfaction with her payoff, he’d keep silent. Because Codependent Romeo is game-theory-rational, he’s indifferent to Codependent Juliet’s satisfaction with her outcome and only cares about maximising his personal payoff.
The standard assumption in a game-theoretic analysis is that a poker player wants money, a chess player wants to win chess games and so on, and that they are indifferent to their opponents’ opinions about the outcome, just as Codependent Romeo is maximising his own payoff matrix and is indifferent to Codependent Juliet’s.
That is what we attempt to convey when we tell people that game-theory-rational players are neither benevolent nor malevolent. Even if you incorporate something you want to call “altruism” into their preference order, they still don’t care directly about where anyone else ends up in those other people’s preference orders.
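One way to see this structurally: in a best-response computation, the only input is the chooser’s own payoff table. Here is a minimal Python sketch (the table reuses the codependent utilities from above; the function name is mine). Juliet’s utilities, her satisfaction, her ranking of outcomes: none of them appear anywhere in the function that picks Romeo’s move.

```python
# Romeo's own payoff table, indexed by (his_action, her_action).
# These are the codependent utilities: minus Juliet's sentence.
ROMEO = {("silent", "silent"): -1, ("silent", "confess"): -10,
         ("confess", "silent"): 0, ("confess", "confess"): -5}

def best_response(own_payoff, her_action, actions=("silent", "confess")):
    """Pick the action maximising the chooser's OWN payoff; nothing else is consulted."""
    return max(actions, key=lambda mine: own_payoff[(mine, her_action)])

print(best_response(ROMEO, "silent"))   # confess
print(best_response(ROMEO, "confess"))  # confess
```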
> Now given that game theory is not making any normative claims, it can’t be saying things which are normatively bad.
Not true. The word ‘connotations’ comes to mind. As does “reframing to the extent of outright redefining a critical keyword”. That is not a normatively neutral act. It is legitimate for me to judge it and I choose to do so—negatively.