More freedom is always good from an individual rationality perspective, but game theory has lots of situations where giving more behavior options to one agent causes harm to everyone, or where imposing a restriction makes everyone better off. For example, if we’re playing the Centipede game and I somehow make it impossible for myself to play “down” for the first 50 turns—unilaterally, without requiring any matching commitment on your part—then we both win much more than we otherwise would.
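To make the payoff claim concrete, here is a minimal Python sketch. The payoff scheme (the pot grows 25% per turn and whoever plays "down" takes 80% of it) is a toy assumption of mine, not the particular game above, but the shape of the conclusion doesn't depend on those details:

```python
def payoffs(take_turn, total_turns=100, pot0=2.0, growth=1.25):
    """Payoffs (taker, other) if "down" is first played on take_turn (1-indexed).
    Toy scheme: the pot grows 25% per turn; the taker gets 80% of the current pot;
    if nobody ever takes, the final pot is split evenly."""
    if take_turn is None:
        pot = pot0 * growth ** total_turns
        return pot / 2, pot / 2
    pot = pot0 * growth ** (take_turn - 1)
    return 0.8 * pot, 0.2 * pot

# Without any commitment, backward induction unravels to the very first move:
# player 1 takes immediately and the payoffs are tiny.
print(payoffs(1))    # (1.6, 0.4)

# If player 1 is committed to passing on all of their moves within the first 50
# turns, the unraveling stops there: player 2's best response is to take on
# turn 50, and even the smaller 20% share (which goes to player 1) dwarfs the
# no-commitment outcome.
print(payoffs(50))   # roughly (9e4, 2.2e4)
```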
Well, if you make it impossible for yourself to play down, then that’s a perfectly valid exercise of your control over your own life, isn’t it? To get a paradox, you should consider whether I would impose that restriction on you (or at least whether I would take part in enforcing your previously chosen constraint once you change your mind).
Usually in situations like this, I think that the best thing to do is to figure out why the payoffs work in that way and then try to work with you to beat the system. If that’s not possible now, then I would usually announce my intention to cooperate, then do so, to build trust (and maybe guilt if you defect) for future interactions.
If I’m playing the game as part of an experiment, so that it really is just a game in the ordinary sense, then I would try to predict your behaviour and play accordingly; this has much more to do with psychology than game theory. I wouldn’t have to force you to cooperate on the first 50 turns if I could convince you of the truth: that I would cooperate on those turns anyway, because I already predict that you will cooperate on those turns.
If the Centipede game, or any of the standard examples from game theory, really were the entire world, then freedom wouldn’t be a very meaningful concept anyway.
Then you make it a tautology that “freedom is good”, because any restriction on freedom that leads to an increase of good will be rebranded as a “valid exercise of control”. Maybe I should give an example of the reverse case, where adding freedom makes everyone worse off. See Braess’s paradox: adding a new free road to the road network, while keeping the number of drivers constant, can make every driver take longer to reach their destination. (And yes, this has actually been observed to happen in real life.) Of course this is just another riff on the Nash equilibrium theme, but you should think more carefully about what your professed values entail.
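To see how the numbers can work out, here is the textbook version in Python (these particular figures are the standard illustration, not anything specific to this thread):

```python
N = 4000  # drivers travelling from Start to End

# Two routes: Start->A->End and Start->B->End.
# The narrow legs Start->A and B->End each take (drivers on them)/100 minutes;
# the wide legs A->End and Start->B each take a flat 45 minutes.

# Without the extra link, the equilibrium splits the drivers evenly:
before = (N / 2) / 100 + 45      # 20 + 45 = 65 minutes for everyone

# Now add a free (zero-minute) link from A to B. The route Start->A->B->End
# looks attractive to each individual driver, and in the new equilibrium
# everyone uses it:
after = N / 100 + 0 + N / 100    # 40 + 0 + 40 = 80 minutes for everyone

# No driver can do better unilaterally: switching back to Start->A->End or
# Start->B->End would now take 40 + 45 = 85 minutes. So the extra road leaves
# every single driver 15 minutes worse off, which is Braess's paradox.
print(before, after)             # 65.0 80.0
```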
Yes, freedom is my ultimate social value! That’s not a tautology, but an axiom. It’s not that I like freedom because I believe it maximises happiness (or whatever); I just like it.
Yes, Braess’s paradox is more interesting, especially when closing a road would improve traffic flow. People have to balance their desire to drive on the old road against their desire to drive in decongested traffic. If the drivers themselves have control over whether to close the road, then the paradox dissolves (at least if all of the drivers think alike). But if the road closure is imposed by an outside authority, then I would oppose closing the road, even if it’s ‘for their own good’.
Also maybe relevant: Sen’s paradox (the impossibility of a Paretian liberal). If you can’t tell, I love this stuff and could go on listing it all day :-)
As currently described at your link, that one doesn’t seem so hard. Person 2 simply says to Person 1 ‘If you don’t read it, then I will’, to which Person 1 will agree. There’s no real force involved; if Person 1 puts down the book, then Person 2 picks it up, that’s all. I know that this doesn’t change the fact that the theorem holds, but the theorem doesn’t seem terribly relevant to real life.
But Person 1 is still being manipulated by a threat, so let’s apply the idea of freedom instead. Then the preferences of Persons 1 and 2 may begin as in the problem statement, but Person 1 (upon sober reflection) allows Person 2’s preferences to override Person 1’s preferences, when those preferences are only about Person 2’s life, and vice versa. Then Person 1 and Person 2 both end up wanting y, z, x; Person 1 grudgingly, but with respect for Person 2’s rights, gives up the book, while Person 2 refrains from any manipulative threats, out of respect for Person 1.
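A small sketch of the orderings involved, assuming the usual labelling and the standard prude/lewd preferences from Sen’s book example (x = Person 1 reads the book, y = Person 2 reads it, z = nobody does; if the link labels the outcomes differently, permute accordingly):

```python
from itertools import permutations

# x = Person 1 (the prude) reads the book, y = Person 2 (the lewd) reads it,
# z = nobody reads it. Orderings are tuples from best to worst.
person1 = ('z', 'x', 'y')   # prude: ideally nobody reads it
person2 = ('x', 'y', 'z')   # lewd: ideally the prude has to read it

# Sen's conditions:
#   Person 1 is decisive over {x, z} (their own reading), so society ranks z over x.
#   Person 2 is decisive over {y, z} (their own reading), so society ranks y over z.
#   Pareto: both prefer x to y, so society ranks x over y.
# Together: z > x, y > z, x > y, which is a cycle.
required = [('z', 'x'), ('y', 'z'), ('x', 'y')]

def respects(order, a, b):
    return order.index(a) < order.index(b)

consistent = [order for order in permutations('xyz')
              if all(respects(order, a, b) for a, b in required)]
print(consistent)   # [] : no social ordering satisfies all three conditions

# The "mutual respect" fix described above: each person adopts the other's view
# on the pair that concerns only the other's life. Person 1 takes y > z from
# Person 2, Person 2 takes z > x from Person 1, and both now rank the outcomes
# y > z > x, a consistent and unanimous ordering.
```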
More freedom makes it more difficult to signal what you’ll actually do. All else being equal, freedom is good.
Yes, this is something that I worry about. You can try to force your signal to be accurate by entering a contract, but even if you signed a contract in the past, how can anybody enforce the contract now without impinging on your present freedom? The best that I’ve come up with so far is to use trust metrics, like a credit rating. (Payment of debts is pretty much unenforceable in the modern First World, which is why they invented credit reports.)
What Nesov said.
Thomas Schelling gives many examples of incentivising agreements instead of enforcing them. Here’s one: you and I each want to spend 1 million dollars on producing a nonexcludable common good that will give each of us 1.5 million in revenue. (So each dollar spent on the good creates 1.5 dollars in revenue that are split evenly between us, no matter who spent the initial dollar.) Individually, it’s better for me if you spend your million and I don’t spend mine: I keep my million and still collect 0.75 million from your investment, ending up with 1.75 million instead of 1.5. Schelling’s answer is to spread the investment out in time: you invest a penny, I see it and invest a penny in turn, and so on. This way it costs almost nothing for either of us to establish mutual trust at the start, and it becomes rational to keep cooperating every step of the way.
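The arithmetic, sketched in Python (the 0.75-per-dollar return to each party is just the numbers above restated; the penny-matching logic is one way to read Schelling’s suggestion, not the only one):

```python
def final_wealth(my_spend, your_spend, endowment=1.0, rate=0.75):
    """Amounts in millions of dollars: each of us starts with 1, and every
    1 spent on the good pays 0.75 back to each of us."""
    return endowment - my_spend + rate * (my_spend + your_spend)

# One-shot payoffs, showing the prisoner's-dilemma shape:
print(final_wealth(1, 1))   # 1.5   both invest
print(final_wealth(0, 1))   # 1.75  I hold back while you invest the million
print(final_wealth(1, 0))   # 0.75  I invest while you hold back
print(final_wealth(0, 0))   # 1.0   neither invests

# Schelling's incremental version: we alternate pennies, each of us matching
# only as long as the other matched at the previous step. Matching one more
# penny costs me a net quarter of a penny (I pay 1 cent and get 0.75 cents
# back), but refusing to match forfeits the 75 cents on the dollar I would
# collect from the rest of your million, so matching stays the better move at
# every step, and we both walk all the way up to final_wealth(1, 1) = 1.5.
```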
The paradoxical decision theorist would still say, ‘You fool! Don’t put in a penny; your rational opponent won’t reciprocate, and you’ll be out a farthing.’ Fortunately nobody behaves this way, and it wouldn’t be rational to predict it.
I would probably put in half a million right away, if I don’t know you at all other than knowing that you value the good like I do. I’m sure that you can find a way to manipulate me to my detriment if you know that, since it’s based on nothing more than a hunch; and actually this is the sort of place where I would expect to see a lecture as to exactly how you would do so, so please fire away! (Of course, any actual calculation as to how fast to proceed depends on the time discounting and the overhead of it all, so there is no single right answer.)
I agree: slowly building up trust over time is an excellent tactic. Looking up somebody’s trust metric is only for strangers.
You are never free to change what you actually are and what you actually want, so these invariants can be used to force a choice on you by making it the best one available.
Um, Braess’s paradox doesn’t involve signaling.
That’s the reason bad things happen. Before the extra capacity is added, the drivers’ actions are restricted by the problem statement itself, so no signaling is needed; its role is already filled. If all the drivers decide to ignore the addition, and can effectively signal to each other that they actually will, then they end up with the old plan, which is better than the new equilibrium, and so they would choose to precommit to that restriction. More freedom made it harder to signal the same plan, by reducing information. But of course, with the new capacity they could in principle find an even better plan, if only they could precommit to it (that is, coordinate their actions).