Said literature makes statements about what is game-theory-rational. Those statements are only epistemically, instrumentally or normatively bad if you take them to be statements about what is LW-rational or “rational” in the layperson’s sense.
Disagree on instrumentally and normatively. Agree regarding epistemically—at least when the works are careful with what claims are made. Also disagree with the term "game-theory-rational", although I understand the principle you are trying to get at. Either a more limited claim or more precise terminology is needed.
I would be interested in reading about the bases for your disagreement. Game theory is essentially the exploration of what happens if you postulate entities who are perfectly informed, personal utility-maximisers who do not care at all either way about other entities. There’s no explicit or implicit claim that people ought to behave like those entities, thus no normative content whatsoever. So I can’t see how the game theory literature could be said to give normatively bad advice, unless the speaker misunderstood which definition of rationality was being used and took it to be a normative one.
I’m not sure what negative epistemic or instrumental outcomes you foresee either, but I’m open to the possibility that there are some.
Is there a term you prefer to “game-theory-rational” that captures the same meaning? As stated above, game theory is the exploration of what happens when entities that are “rational” by that specific definition interact with the world or each other, so it seems like the ideal term to me.
I would be interested in reading about the bases for your disagreement. Game theory is essentially the exploration of what happens if you postulate entities who are perfectly informed, personal utility-maximisers who do not care at all either way about other entities.
Under this definition you can’t claim epistemic accuracy either. In particular, the ‘perfectly informed’ assumption, when combined with personal utility maximisation, leads to behaviours different from those described as ‘rational’. (It needs to be weakened to “perfectly informed about everything except those parts of the universe that are the other agent”.)
There’s no explicit or implicit claim that people ought to behave like those entities, thus no normative content whatsoever.
This isn’t about the agents having selfish desires (in fact, they don’t even have to “not care at all about other entities”—altruism determines what the utility function is, not how to maximise it). No, this is about shoddy claims about decision theory that are either connotatively misleading or erroneous depending on how they are framed. All those poor paperclip maximisers who read such sources and take them at face value will end up producing fewer paperclips than they could have if they knew the correct way to interact with the staple maximisers in contrived scenarios.
This isn’t about the agents having selfish desires (in fact, they don’t even have to “not care at all about other entities”—altruism determines what the utility function is, not how to maximise it).
This is wrong. The standard assumption is that game-theory-rational entities are neither altruistic nor malevolent. Otherwise the Prisoner’s Dilemma wouldn’t be a dilemma in game theory. It’s only a dilemma as long as both players are solely interested in their own outcomes. As soon as you allow players to have altruistic interests in other players’ outcomes it ceases to be a dilemma.
You can do similar mathematical analyses with altruistic agents, but at that point, strictly speaking, you’re doing decision-theoretic or possibly utilitarian calculations, not game-theoretic calculations.
Utilitarian ethics, game theory and decision theory are three different things, and it seems to me your criticism assumes that statements about game theory should be taken as statements about utilitarian ethics or statements about decision theory. I think that is an instance of the fallacy of composition and we’re better served to stay very aware of the distinctions between those three frameworks.
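To make the claim about altruism dissolving the dilemma concrete, here is a minimal sketch in Python. The payoff numbers and the “add a weighted copy of the other player’s payoff” form of altruism are assumptions for illustration, not anything specified in the discussion above.

```python
# A minimal sketch (assumed, illustrative payoff numbers): the standard
# Prisoner's Dilemma, and what happens to the dominant strategy when each
# player's utility also weights the other player's payoff.

ACTIONS = ["C", "D"]  # cooperate / defect

# base[(a1, a2)] = (payoff to player 1, payoff to player 2); higher is better
base = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utilities(weight_on_other):
    """Each player's utility = own payoff + weight * other player's payoff."""
    return {
        outcome: (p1 + weight_on_other * p2, p2 + weight_on_other * p1)
        for outcome, (p1, p2) in base.items()
    }

def dominant_action_for_player1(table):
    """Return player 1's strictly dominant action, if any."""
    for a in ACTIONS:
        rest = [b for b in ACTIONS if b != a]
        if all(table[(a, opp)][0] > table[(b, opp)][0]
               for b in rest for opp in ACTIONS):
            return a
    return None

print(dominant_action_for_player1(utilities(0.0)))  # 'D': the classic dilemma
print(dominant_action_for_player1(utilities(1.0)))  # 'C': no dilemma left
```

With weight 0 the usual defect–defect logic applies; with weight 1 cooperation strictly dominates, so under this particular transformation of the payoffs the dilemma disappears.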
This is wrong. The standard assumption is that game-theory-rational entities are neither altruistic nor malevolent.
No, it just isn’t. Game theory is completely agnostic about what the preferences of the players are based on. Game theory takes a payoff matrix and calculates things like Nash equilibria and dominant strategies. The verbal description of why the payoff matrix takes the form it does is fluff.
Otherwise the Prisoner’s Dilemma wouldn’t be a dilemma in game theory. It’s only a dilemma as long as both players are solely interested in their own outcomes. As soon as you allow players to have altruistic interests in other players’ outcomes it ceases to be a dilemma.
As soon as you allow altruistic interests the game ceases to be a Prisoner’s Dilemma. The dilemma (and game theory in general) relies on the players being perfectly selfish in the sense that they ruthlessly maximise their own payoffs as they are defined, not in the sense that those payoffs must never refer to aspects of the universe that happen to include the physical state of the other agents.
Consider the Codependent Prisoner’s Dilemma. Romeo and Juliet have been captured and the guards are trying to extort confessions out of them. However, Romeo and Juliet are both lovesick and infatuated and care only about what happens to their lover, not what happens to themselves. Naturally the guards offer Romeo the deal “If you confess we’ll let Juliet go and you’ll get 10 years, but if you don’t confess you’ll both get 1 year” (and vice versa, with a both-confess clause in there somewhere). Game theory is perfectly equipped to handle this game. In fact, so much so that it wouldn’t even bother giving it a new name. It’s just a Prisoner’s Dilemma, and the fact that the conflict of interests between Romeo and Juliet happens to be based on codependent altruism rather than narcissism is outside the scope of what game theorists care about.
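Here is a hedged sketch of the Codependent Prisoner’s Dilemma just described. The exact sentence lengths, including the 5-year “both confess” outcome, are assumed, since the comment leaves them unspecified.

```python
# Sketch of the Codependent Prisoner's Dilemma (sentence lengths assumed).

ACTIONS = ["confess", "silent"]

# years[(romeo_action, juliet_action)] = (Romeo's sentence, Juliet's sentence)
years = {
    ("confess", "silent"):  (10, 0),
    ("silent",  "confess"): (0, 10),
    ("silent",  "silent"):  (1, 1),
    ("confess", "confess"): (5, 5),   # assumed "both confess" clause
}

# Each lover cares only about the OTHER's sentence, so their own payoff
# matrix is built from the other's years. Game theory only ever sees these
# numbers; it doesn't care that the motive happens to be altruistic.
romeo_payoff  = {o: -j_years for o, (_, j_years) in years.items()}
juliet_payoff = {o: -r_years for o, (r_years, _) in years.items()}

def strictly_dominant(payoff, player):
    """Strictly dominant action for player 0 (Romeo) or 1 (Juliet), if any."""
    def key(mine, theirs):
        return (mine, theirs) if player == 0 else (theirs, mine)
    for a in ACTIONS:
        rest = [b for b in ACTIONS if b != a]
        if all(payoff[key(a, opp)] > payoff[key(b, opp)]
               for b in rest for opp in ACTIONS):
            return a
    return None

print(strictly_dominant(romeo_payoff, 0))   # 'confess'
print(strictly_dominant(juliet_payoff, 1))  # 'confess'
```

Confessing strictly dominates for both lovers, and they land on mutual 5-year sentences when mutual silence would have given 1 year each: structurally the same dilemma, despite purely altruistic motives.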
It seems from my perspective that we are talking past each other and that your responses are no longer tracking the original point. I don’t personally think that deserves upvotes, but others obviously differ.
Your original claim was that:
Said literature gives advice, reasoning and conclusions that is epistemically, instrumentally and normatively bad.
Now given that game theory is not making any normative claims, it can’t be saying things which are normatively bad. Similarly, since game theory does not say either that you should go out and act like a game-theory-rational agent or that you should act as if others will do so, it can’t be saying anything instrumentally bad either.
I just don’t see how it could even be possible for game theory to do what you claim it does. That would be like stating that a document describing the rules of poker was instrumentally and normatively bad because it encouraged wasteful, zero-sum gaming. It would be mistaking description for prescription.
We have already agreed, I think, that there is nothing epistemically bad about game theory taken as it is.
Everything below responds to the off-track discussion above and can be safely ignored by posters not specifically interested in that digression.
In game theory each player’s payoff matrix is their own. Notice that Codependent Romeo does not care where Codependent Juliet ends up in her payoff matrix. If Codependent Romeo were altruistic in the sense of wanting to maximise Juliet’s satisfaction with her payoff, he’d be keeping silent. Because Codependent Romeo is game-theory-rational, he’s indifferent to Codependent Juliet’s satisfaction with her outcome and only cares about maximising his personal payoff.
The standard assumption in a game-theoretic analysis is that a poker player wants money, a chess player wants to win chess games and so on, and that they are indifferent to their opponents’ opinions about the outcome, just as Codependent Romeo maximises his own payoff matrix and is indifferent to Codependent Juliet’s.
That is what we attempt to convey when we tell people that game-theory-rational players are neither benevolent nor malevolent. Even if you incorporate something you want to call “altruism” into their preference order, they still don’t care directly about where anyone else ends up in those other people’s preference orders.
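To illustrate the distinction being drawn here (maximising your own payoff matrix versus maximising the other player’s), here is another small sketch using the same assumed sentence lengths as in the earlier example.

```python
# Own payoff matrix vs. the other player's (sentence lengths assumed as before).

ACTIONS = ["confess", "silent"]
years = {("confess", "silent"): (10, 0), ("silent", "confess"): (0, 10),
         ("silent", "silent"): (1, 1), ("confess", "confess"): (5, 5)}

# Codependent Romeo's own payoff: minimise Juliet's sentence.
own_payoff = {o: -j for o, (_, j) in years.items()}
# Juliet's payoff (her own matrix): minimise Romeo's sentence.
juliets_payoff = {o: -r for o, (r, _) in years.items()}

def best_reply(payoff, juliet_action):
    """Romeo's best action against a fixed action by Juliet."""
    return max(ACTIONS, key=lambda a: payoff[(a, juliet_action)])

# Game-theory-rational Romeo maximises HIS matrix: confess, whatever Juliet does.
print([best_reply(own_payoff, ja) for ja in ACTIONS])      # ['confess', 'confess']
# A Romeo who instead maximised Juliet's satisfaction with HER payoff would be
# maximising her matrix, and would keep silent, as the comment above notes.
print([best_reply(juliets_payoff, ja) for ja in ACTIONS])   # ['silent', 'silent']
```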
Now given that game theory is not making any normative claims, it can’t be saying things which are normatively bad.
Not true. The word ‘connotations’ comes to mind. As does “reframing to the extent of outright redefining a critical keyword”. That is not a normatively neutral act. It is legitimate for me to judge it and I choose to do so—negatively.