Now Omega takes 20 people and puts them in the same situation as in the original problem. It lets each of them flip their coins. Then it goes to each of the people who got tails, and offers $1 to charity for each coin that came up tails, but threatens to steal $3 from charity for each coin that came up heads.
It’s worth noting that if everyone got to make this choice separately—Omega doing it once for each person who responds—then it would indeed be wise for everyone to take the bet! This is evidence in favor of either Bostrom’s division-of-responsibility principle, or byrnema’s pointer-based viewpoint, if indeed those two views are nonequivalent.
Bostrom’s calculation is correct, but I believe it is an example of multiplying by the right coefficients for the wrong reasons.
I did exactly the same thing—multiplied by the right coefficients for the wrong reasons—in my deleted comment. I realized that justifying these coefficients required a quite different problem (in my case, I modeled all the green roomers as deciding to evenly divide the spoils of the whole group), and the only reason it worked was that multiplying the first term by 1⁄18 and the second term by 1⁄2 effectively canceled away the factors that represented your initial 90% posterior, so you were ultimately just applying the 50⁄50 probability of the non-anthropic solution.
Anthropic calculation:
(18/20)(12) + (2/20)(−52) = 5.6
Bostrom-modified calculation for responsibility per person:
[(18/20)(12)/18 + (2/20)(−52)/2] / 2 = −1
Non-anthropic calculation for EV per person:
[(1/2)(12) + (1/2)(−52)] / 20 = −1
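For concreteness, here is a minimal Python sketch that just reproduces the three results above, using the payoffs that appear in the formulas (+12 if the coin came up heads, −52 if tails) and the green-roomer counts from the original problem (18 on heads, 2 on tails):

    from fractions import Fraction as F

    # Sketch reproducing the three calculations above, using exact fractions.
    payoff_heads, payoff_tails = 12, -52   # group payoff if the green roomers accept
    greens_heads, greens_tails = 18, 2     # number of green roomers in each branch

    # Anthropic calculation, using the updated P(heads | green room) = 18/20.
    anthropic = F(18, 20) * payoff_heads + F(2, 20) * payoff_tails
    print(float(anthropic))      # 5.6

    # Bostrom-modified calculation: divide each branch's payoff by the number
    # of green roomers sharing responsibility for the decision in that branch.
    bostrom = (F(18, 20) * payoff_heads / greens_heads
               + F(2, 20) * payoff_tails / greens_tails) / 2
    print(float(bostrom))        # -1.0

    # Non-anthropic calculation: a fair coin, with the group payoff split over 20 people.
    non_anthropic = (F(1, 2) * payoff_heads + F(1, 2) * payoff_tails) / 20
    print(float(non_anthropic))  # -1.0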
My pointer-based viewpoint, in contrast, is not a calculation but a rationale for why you must use the 50⁄50 probability rather than the 90⁄10 one. The argument is that each green roomer cannot use the information that they were in a green room because this information was preselected (a biased sample). With effectively no information about what color room they’re in, each green roomer must resort to the non-anthropic calculation that the probability of flipping heads is 50%.
I can very much relate to Eliezer’s original gut reaction: I agree that Nick’s calculation is very ad hoc and hardly justifiable.
However, I also think that, although you are right about the pointer bias, your explanation is still incomplete.
I think Psi-kosh made an important step with his reformulation. In particular, eliminating the copying procedure for the agents was essential. If you follow the math through from the point of view of one of the agents, the nature of the problem becomes clear:
Trying to write down the payoff matrix from the viewpoint of one of the agents, it becomes clear that you can't fill out any of the reward entries, since the outcome never depends on that agent's decision alone. If he drew a green marble, the outcome still depends on the other agents' decisions, and if he drew a red one, it depends only on the other agents' decisions.
This makes it completely clear that the only solution is for the agents to agree on a predetermined protocol, and therefore the second calculation of the OP is the only correct one so far.
However, this protocol does not imply anything about P(heads | being in a green room). That probability is simply irrelevant to the expected value of any agreed-upon protocol. One could create a protocol that depends on P(heads | being in a green room) for some of the agents, but you would still have to analyze the expected value of the protocol from a global point of view, not just from the point of view of the agent, because you can't complete the decision matrix when the outcome depends on the other agents' decisions as well.
Of course, a predetermined protocol does not mean that the agents must explicitly agree on a narrow protocol before acting. If we assume that the agents get all the information once they find themselves in the room, they can still build a mental model of the whole global situation and base their decision on the second calculation of the OP.
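To make "analyzing the protocol globally" concrete, here is a minimal sketch. It assumes the payoffs used in the calculations earlier in the thread (a group gain of +12 if the green roomers accept and the coin was heads, −52 if they accept and it was tails, and 0 if they refuse), and it ignores the unanimity penalty, since a deterministic protocol never triggers it:

    # Sketch: evaluate each predetermined protocol from the global point of view,
    # instead of trying to fill in a decision table for one agent in isolation.

    def group_payoff(coin, greens_accept):
        """Total payoff to the group, given the coin and the agreed protocol."""
        if not greens_accept:
            return 0
        return 12 if coin == "heads" else -52

    def protocol_value(greens_accept):
        # The coin is fair, so each branch gets weight 1/2.
        return (0.5 * group_payoff("heads", greens_accept)
                + 0.5 * group_payoff("tails", greens_accept))

    print(protocol_value(greens_accept=True))    # -20.0 : accepting loses on average
    print(protocol_value(greens_accept=False))   #   0.0 : refusing is the better protocol

This is just the second calculation of the OP; the point is that the expected value attaches to the protocol as a whole, not to any single agent's decision table.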
I agree with you that the reason you can't use the 90⁄10 prior is that the decision never depends on a person in a red room.
In Eliezer’s description of the problem above, he tells each green roomer that he is asking all the green roomers whether they want him to go ahead with a money-distribution scheme, and that they must be unanimous or there is a penalty.
I think this is a nice pedagogical component that helps a person understand the dilemma, but I would like to emphasize here (even if you're aware of it) that it is completely superfluous to the mechanics of the problem. It doesn't make any difference whether Eliezer bases his action on the answer of one green roomer or all of them.
For one thing, all the green roomers' answers will be unanimous, because they all have the same information and are asked the same complicated question.
And, more to the point, even if just one green roomer is asked, the dilemma still exists that he can’t use his prior that heads was probably flipped.
[EDIT:] Although I would put it a bit more generally, regardless of red rooms: if you have several actors, then even if they necessarily make the same decision, they have to analyze the global picture. The only situation in which an agent may make the simplified subjective Bayesian decision-table analysis is when he is the only actor (no copies, etc.). It is easy to construct simple decision problems without "red rooms" where each of the actors has some control over the outcome and none of them can do the analysis for itself alone, but each has to build a model of the whole situation to make the globally optimal decision.
However, I did not imply in any way that the penalty matters. (At least, as long as the agents are sane and don't start flipping non-logical coins.) The global analysis of the payoff may clearly disregard the penalty case if it's impossible under that specific protocol. The only requirement is that the expected value calculation must be made on a protocol-by-protocol basis.
My intuition says that this is qualitatively different. If the agent knows that only one green roomer will be asked the question, then upon waking up in a green room the agent thinks “with 90% probability, there are 18 of me in green rooms and 2 of me in red rooms.” But then, if the agent is asked whether to take the bet, this new information (“I am the unique one being asked”) changes the probability back to 50-50.
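That update can be checked directly. A minimal sketch, assuming the single green roomer to be asked is chosen uniformly at random among the green roomers:

    from fractions import Fraction as F

    p_heads = F(1, 2)                                  # fair-coin prior
    p_green = {"heads": F(18, 20), "tails": F(2, 20)}  # P(I am in a green room | coin)
    p_asked = {"heads": F(1, 18), "tails": F(1, 2)}    # P(I am the one asked | green, coin)

    # P(green and asked | coin) comes out the same in both branches ...
    likelihood = {c: p_green[c] * p_asked[c] for c in ("heads", "tails")}
    print(likelihood)        # both are 1/20

    # ... so being the unique green roomer asked carries no evidence about the coin.
    posterior_heads = p_heads * likelihood["heads"] / (
        p_heads * likelihood["heads"] + (1 - p_heads) * likelihood["tails"])
    print(posterior_heads)   # 1/2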