Stream of consciousness style answer. Not looking at other comments, so I can see afterwards whether my thinking is the same as anyone else’s.
The argument for saying yea once one is in the room seems to assume that everyone else will make the same decision as me, whatever my decision is. I’m still unsure whether this kind of thinking is allowed in general, but in this case it seems to be the source of the problem.
If we take the opposite assumption, that the other decisions are fixed, then the problem depends on those decisions. If we assume that all the others (if there are any) will say yea, then U(yea) = 0.9 × $1000 + 0.1 × $100 = $910 = 0.91 lives, while U(nay) = 0.9 × $0 + 0.1 × $700 = $70 = 0.07 lives, so clearly I say yea. If we assume that all the others (if there are any) will say nay, then U(yea) = 0.9 × $0 + 0.1 × $100 = $10 = 0.01 lives, while U(nay) = 1 × $700 = $700 = 0.7 lives, so clearly I say nay.
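As a sanity check, these expected values can be reproduced with a short script (assuming the stated payoffs: heads with probability 0.9, $1000 on a unanimous yea under heads, $100 on yea under tails, $700 on a unanimous nay, $0 on disagreement, and $1000 saving one life):

```python
# Expected lives saved, assuming $1000 donated saves one life.
P_HEADS = 0.9  # nine deciders
P_TAILS = 0.1  # one decider

def lives(dollars):
    return dollars / 1000.0

# Case 1: everyone else says yea.
u_yea_given_yea = P_HEADS * lives(1000) + P_TAILS * lives(100)  # 0.91
u_nay_given_yea = P_HEADS * lives(0) + P_TAILS * lives(700)     # 0.07

# Case 2: everyone else says nay.
u_yea_given_nay = P_HEADS * lives(0) + P_TAILS * lives(100)     # 0.01
u_nay_given_nay = 1.0 * lives(700)                              # 0.70

print(u_yea_given_yea, u_nay_given_yea)
print(u_yea_given_nay, u_nay_given_nay)
```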
In other words, if we expect others to say yea, then we say yea, and if we expect others to say nay, then we say nay (if people say yea with some probability p, then our response is either yea or nay depending on whether p is above some threshold). It appears we have a simple game-theory problem with two Nash equilibria. I’m not sure of a rule for deciding which equilibrium to pick, so I’ll try some toy problems.
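Under one simple model of that threshold (my own assumption: the other deciders act as a coordinated bloc that says yea with probability p, so the payoffs interpolate linearly between the two cases above), the crossover point can be solved for directly:

```python
# Expected lives saved, assuming the other deciders act as a bloc
# that says yea with probability p (an illustrative model, not the
# only way to fill in "some threshold").
u_yea = lambda p: p * 0.91 + (1 - p) * 0.01
u_nay = lambda p: p * 0.07 + (1 - p) * 0.70

# Setting u_yea(p) = u_nay(p):
#   0.90p + 0.01 = 0.70 - 0.63p  =>  p = 0.69 / 1.53 ~ 0.451
p_star = 0.69 / 1.53
print(p_star)  # say yea iff p exceeds this threshold
```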
If we have 10 people, each in a room with two buttons, one red and one blue, and they are told that $1000 will be donated to Village Reach if they all press the same colour, but no money will be donated if there is any disagreement, then they have quite a difficult dilemma: two Nash equilibria, but no way to single out one of them, which makes them unlikely to end up in either.
If we change the problem so that only $500 is donated when they all press blue, but $1000 is still given on red, then the decision becomes easy. I’m sure everyone will agree with me that you should definitely press red in this dilemma. It seems like, in general, a good rule for game-theory problems is “if faced with multiple Nash equilibria, pick the one you would have agreed to in advance”. This might not give an answer in games where the players’ utility functions are opposed to each other, but it works fine in this problem.
So I say nay. Both before and after hearing that I am a decider. Problem solved.
Thinking about this a bit more, it seems like the problem came from Timeless Decision Theory and was solved by Causal Decision Theory. A rather disturbing state of affairs.
I like how you apply game theory to the problem, but I don’t understand why it supports the answer “nay”. The calculations at the beginning of your comment seem to indicate that the “yea” equilibrium gives a higher expected payoff than the “nay” equilibrium, no?
If all ten individuals were discussing the problem in advance, they would conclude that nay was better, so, by the rule I set up, when faced with the problem you should say nay.
The problem comes from mixing individual thinking, where you ask what is the best thing for you to do, with group thinking (no relation to groupthink), where you ask what is the best thing for the group to do. The rule I suggested can be expressed as “when individual thinking leaves you with more than one possible solution, use group thinking to decide between them”. Updating on the fact that you are a decider is compulsory in individual thinking but forbidden in group thinking, and problems arise when you get confused about this distinction.