I think the mention here of “unsurvivable” temperature misses this point from the simulation description:
“their bodies repair themselves automatically, so there is no release from their suffering”
I agree that the incentives are different if high temperatures are not survivable and/or there is a release from suffering. In particular, the best alternative to a negotiated agreement is probably for me to experience a short period of excruciating pain and then die. This means that no outcome can be worse for me than that.
Ah, true. But I still wouldn’t expect the difference between 99 and 99.3 to matter much compared to the possibility of breaking the deadlock and going back to a non-torturous temperature. If the equilibrium is 99, the worst the others can do to you is raise it to 99.3. Conversely, keeping your dial at 30 sends a signal that someone is trying to lower the temperature, and if even one other person joins you, you get 98.6. At that point the temperature might go even lower as others pick up on the trend. Essentially, as things are presented here, there is no reason why the equilibrium ought to be stable.
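To spell out the arithmetic behind those figures, here is a minimal sketch. It assumes 100 prisoners, dials ranging from 30 to 100, and holdouts punishing deviators at 100; the prisoner count is my inference from the −99.3 min-max figure discussed further down, not something stated in this comment.

```python
# Arithmetic behind the 99.3 and 98.6 figures, assuming 100 prisoners,
# dials ranging from 30 to 100, and holdouts punishing deviators at 100.
def average_temp(n_defectors, n_total=100, defect_temp=30, punish_temp=100):
    """Average temperature when n_defectors set their dials to defect_temp
    and everyone else sets theirs to punish_temp."""
    holdouts = n_total - n_defectors
    return (n_defectors * defect_temp + holdouts * punish_temp) / n_total

print(average_temp(1))  # 99.3 -- a lone defector only gets punished
print(average_temp(2))  # 98.6 -- two defectors already beat the 99.0 equilibrium
```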
“Stable Nash equilibrium” is a term-of-art that I don’t think you meant to evoke, but it’s true that you can reach better states if multiple people act in concert. Saying this is a Nash equilibrium only means that no single player can do better, if you assume that everyone else is a robot that is guaranteed to keep following their current strategy no matter what.
This equilibrium is a local maximum surrounded by a tiny moat of even-worse outcomes. The moat is very thin, and almost everything beyond it is better than this, but you need to pass through the even-worse moat in order to get to anything better. (And you can’t cross the moat unilaterally.)
Of course, it’s easy to vary the parameters of this thought-experiment to make the moat wider. If you set the equilibrium at 98 instead of 99, then you’d need 3 defectors to do better, instead of only 2; etc.
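A quick way to check those defector counts, under the same (assumed) setup of 100 prisoners, defectors at 30, and everyone else punishing at 100:

```python
# Smallest number of defectors needed to push the average temperature below a
# given equilibrium, assuming 100 prisoners, defectors at 30, punishers at 100.
def defectors_needed(equilibrium_temp, n_total=100, dial_min=30, dial_max=100):
    for k in range(n_total + 1):
        avg = (k * dial_min + (n_total - k) * dial_max) / n_total
        if avg < equilibrium_temp:
            return k
    return None  # not reachable even if everyone defects

print(defectors_needed(99))  # 2 -- the thin-moat case described above
print(defectors_needed(98))  # 3 -- a slightly wider moat
```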
So you can say “this is such an extreme example that I don’t expect real humans to actually follow it”, but that’s only a difference in degree, not a difference in kind. It’s pretty easy to find real-life examples where actual humans are actually stuck in an equilibrium that is strictly worse than some other equilibrium they theoretically could have, because switching would require coordination between a bunch of people at once (not just 2 or 3).
It is, in theory, but I feel like this underrates the real reason for most such situations: actual asymmetries in values, information, or both. A few things that may hold an otherwise pointless taboo or rule in place:
it serving as a shibboleth that identifies the in-group. This is a tangible benefit in certain situations. It’s true that a different shibboleth could be chosen, perhaps one that is inherently worthwhile rather than merely conventional, but switching requires time and adjustment and may create confusion.
it being tied to some religious or ideological worldview, such that at least some people genuinely believe it’s beneficial rather than just a convention. That makes them a lot more resistant to dropping it even if there were an attempt at coordination.
it having become something that is genuinely unpleasant to drop even at an individual level simply because force of habit has led some individuals to internalize it.
In general, I think the game-theoretic model honestly doesn’t represent any real-world situation well, because it creates a scenario so abstract and extreme that it’s impossible to imagine any of these dynamics at work. Even the worst, most dystopian totalitarianism, in which everyone spies on everyone else and everyone’s life is miserable, will at least have been started by a group of true believers who think it is genuinely a good thing.
I contend examples are easy to find even after you account for all of those things you listed. If you’d like a more in-depth exploration of this topic, you might be interested in the book Inadequate Equilibria.
I’ve read Inadequate Equilibria, but that’s exactly the thing: this specific example doesn’t really convey that sort of situation. At the very least, some social interaction, as well as the path that led to the pathological equilibrium, is crucial to it; they’re an integral part of why such things happen. By stripping all of that away, the 99 °C example makes no sense.
That’s correct, but that just makes this a worse (less intuitive) version of the stag hunt.
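For readers who don’t have the reference handy, here is a minimal sketch of the standard two-player stag hunt; the payoff numbers are one common parametrization, not something from the post or this thread:

```python
# One common parametrization of the two-player stag hunt (illustrative numbers).
# Keys: (my move, their move); values: (my payoff, their payoff).
payoffs = {
    ("stag", "stag"): (2, 2),  # both coordinate on the better equilibrium
    ("stag", "hare"): (0, 1),  # hunting stag alone leaves me with nothing
    ("hare", "stag"): (1, 0),
    ("hare", "hare"): (1, 1),  # the safe but strictly worse equilibrium
}

# Neither pure equilibrium can be escaped unilaterally: against hare, my best
# reply is hare (1 > 0); against stag, it is stag (2 > 1).
print(payoffs[("hare", "hare")], payoffs[("stag", "stag")])  # (1, 1) vs (2, 2)
```

On this reading, setting your dial to 30 plays the role of hunting stag: it only pays off if enough others do it with you, which is why both “everyone at 30” and “everyone at 99, punishing deviators” can be self-sustaining outcomes.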
I’m in the same boat.
“...everyone’s utility in a given round … is the negative of the average temperature.” Why would we assume that?
“Clearly, this is feasible, because it’s happening.” Is this rational? Isn’t this synonymous with saying “clearly my scenario makes sense because my scenario says so”?
“Each prisoner’s min-max payoff is −99.3.” If everyone else is min-maxing against any given individual, you would have a higher payoff if you set your dial to 0, no? The worst total payoff would be −99.
What am I missing? Can anyone bridge this specific gap for me?
“Feasible” is being used as a technical term-of-art, not a value judgment. It basically translates to “physically possible”. You can’t have an equilibrium of 101 because the dials only go up to 100, so 101 is not “feasible”.
The min-max payoff is −99.3 because the dials only go down to 30, not to 0.
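A quick check of that number, assuming 100 prisoners with dials limited to the range 30 to 100 (the prisoner count is inferred from the −99.3 and −99 figures above, not stated here):

```python
# Min-max payoff check, assuming 100 prisoners and dials limited to [30, 100].
n = 100
others_total = 99 * 100       # everyone else min-maxes you by dialing 100
my_best_response = 30         # the lowest the dial physically goes (not 0)
avg = (my_best_response + others_total) / n
print(-avg)                   # -99.3, matching the quoted figure
# If the dial could go down to 0, the same calculation would give -99.0,
# which is where the -99 in the question above comes from.
```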
We’re assuming that utility function because it’s a simple thought-experiment meant to illustrate a general principle, and assuming something more complicated would just make the illustration more complicated. It’s part of the premise of the thought-experiment, just like assuming that people are in cages with dials that go from 30 to 100 is part of the premise.
The problem is that the model is so stripped down that it doesn’t illustrate the principle anymore. The principle, as I understand it, is that there are certain “everyone does X” equilibria in which X doesn’t have to be useful or even good per se; it’s just something everyone has agreed upon. That’s true, but only up to a point. Past a certain degree of utter insanity and masochism, people start solving the coordination problem by reasonably assuming that no one else can actually want X, and may attempt rebellion. In the thermostat example, a single round in which just two prisoners rebelled would be enough to lower the temperature even if everyone else tried to punish them, and at that point the process would snowball. The equilibrium is only “stable” against the smallest possible perturbation: one person turning their knob to 30 and then deciding it isn’t worth it after a single round at a mere 0.3 °C above the already torturous 99 °C.
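A toy sketch of that snowball dynamic. The recruitment rule (one extra rebel per round, with holdouts punishing at 100) is purely illustrative and not part of the original setup:

```python
# Toy illustration of the snowball dynamic described above. The recruitment
# rule (one extra rebel per round) is an assumption for illustration only.
N, DIAL_MIN, DIAL_MAX = 100, 30, 100

rebels = 1
for round_number in range(1, 6):
    avg = (rebels * DIAL_MIN + (N - rebels) * DIAL_MAX) / N
    print(f"round {round_number}: {rebels} rebel(s), average {avg:.1f} °C")
    rebels += 1  # one more prisoner decides the punishment isn't worth it

# round 1: 1 rebel(s), average 99.3 °C   <- lone rebel is worse off than at 99
# round 2: 2 rebel(s), average 98.6 °C   <- already below the 99 equilibrium
# round 3: 3 rebel(s), average 97.9 °C
# ...
```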
I’m confused. Are you saying that the example is bad because the utility function of “everyone wants to minimize the average temperature” is too simplified? If not, why is this being posted as a reply to this chain?