I claim that the first is correct.
Reasoning: the Bayesian update is correct, but the computation of expected benefit is incomplete. Among all universes, deciders are “group” deciders nine times as often as they are “individual” deciders. Thus, while being a decider indicates you are more likely to be in a tails-universe, the decision of a group decider is 1/9th as important as the decision of an individual decider.
That is to say, your update should shift probability weight toward you being a group decider, but you should recognize that changing your mind is a mildly good idea 9⁄10 of the time and a very bad idea 1⁄10 of the time, and that these considerations balance out in favor of NOT changing your mind. Since we know that half the time the decision is made by an individual, their decision to not change their mind must be as important as all the decisions of the collective the other half the time.
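To spell the balance out with the problem's numbers (a quick sketch of the same point, in per-decider shares): switching from "nay" to "yea" gains (1000-700)/9 ≈ 33 per group decider but costs 700-100 = 600 for an individual decider, so the comparison is 0.9*300/9 = 30 against 0.1*600 = 60, and the switch loses in expectation.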
If the decision of a group decider is “1/9th as important”, then what’s the correct way to calculate the expected benefit of saying “yea” in the second case? Do you have in mind something like 0.9*1000/9 + 0.1*100/1 = 110? This doesn’t look right :-(
This can be justified by a change of rules: the deciders get their part of the total sum (to donate it, of course). Then the expected personal gain beforehand (before you learn whether you are a decider):

for "yea": 0.5*(0.9*1000/9 + 0.1*0) + 0.5*(0.9*0 + 0.1*100/1) = 55

for "nay": 0.5*(0.9*700/9 + 0.1*0) + 0.5*(0.9*0 + 0.1*700/1) = 70

And the expected personal gain for a decider:

for "yea": 0.9*1000/9 + 0.1*100/1 = 110

for "nay": 0.9*700/9 + 0.1*700/1 = 140
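For anyone who wants to fiddle with the numbers, here is a minimal Python sketch of the same arithmetic (the variable names are mine; the payoffs and decider counts are the ones used above):

```python
# Per-decider shares under the "deciders split the donated sum" reframing.
# Assumed payoffs: "yea" donates 1000 on tails and 100 on heads; "nay" donates
# 700 either way; tails has 9 deciders, heads has 1 (out of 10 people).
payoff = {"yea": {"tails": 1000, "heads": 100},
          "nay": {"tails": 700, "heads": 700}}
deciders = {"tails": 9, "heads": 1}

def gain_beforehand(answer):
    # Before you learn whether you are a decider: each coin outcome has
    # probability 0.5, and you are a decider in it with probability n/10.
    return sum(0.5 * (deciders[w] / 10) * payoff[answer][w] / deciders[w]
               for w in ("tails", "heads"))

def gain_as_decider(answer):
    # After learning you are a decider: P(tails) = 0.9, P(heads) = 0.1.
    return sum(p * payoff[answer][w] / deciders[w]
               for p, w in ((0.9, "tails"), (0.1, "heads")))

for answer in ("yea", "nay"):
    print(answer, gain_beforehand(answer), gain_as_decider(answer))
# prints approximately: yea 55 110, nay 70 140 (matching the numbers above)
```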
Edit: corrected error in value of first expected benefit.
Edit: Hm, it is possible to reformulate Newcomb's problem in a similar fashion. One of the subjects (A) is asked whether ze chooses one box or two boxes; another subject (B) is then presented with two boxes whose contents are set according to A's choice. If they make identical decisions, they get what they chose; otherwise they get nothing.
And here’s a reformulation of Counterfactual Mugging in the same vein. Find two subjects who don’t care about each other’s welfare at all. Flip a coin to choose one of them who will be asked to give up $100. If ze agrees, the other one receives $10000.
This is very similar to a rephrasing of the Prisoner’s Dilemma known as the Chocolate Dilemma. Jimmy has the option of taking one piece of chocolate for himself, or taking three pieces and giving them to Jenny. Jenny faces the same choice: take one piece for herself or three pieces for Jimmy. This formulation makes it very clear that two myopically-rational people will do worse than two irrational people, and that mutual precommitment at the start is a good idea.
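A tiny payoff table for that framing, as a Python sketch (the function name and the piece counts are just mine, read off the description above):

```python
# Chocolate Dilemma as described: each player either takes 1 piece for
# themself or takes 3 pieces and gives them to the other player.
def chocolate(jimmy_gives: bool, jenny_gives: bool) -> tuple[int, int]:
    jimmy = (0 if jimmy_gives else 1) + (3 if jenny_gives else 0)
    jenny = (0 if jenny_gives else 1) + (3 if jimmy_gives else 0)
    return jimmy, jenny

print(chocolate(False, False))  # both "rational": (1, 1)
print(chocolate(True, True))    # both give: (3, 3), hence precommitment pays
print(chocolate(True, False))   # lone giver gets exploited: (0, 4)
```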
This stuff is still unclear to me, but there may be a post in here once we work it out. Would you like to cooperate on a joint one, or something?
I’m still unsure whether it is anything more than an intuition pump. Anyway, I’ll share any interesting thoughts.
This is awesome! Especially the edit. Thanks.
It’s a pure coordination game.
This kind of answer seems to be on the right track, but I do not know of a good decision theory for when you are not 100% “important”. I have an intuitive sense of what this means, but I don’t have a technical understanding of what it means to be merely part of a decision and not the full decision maker.
Can the Shapley value and its generalizations help us here? They deal with the question “how important was this part of the coalition to the final result?”.
Have an upvote for noticing your own confusion. I posted the problem because I really want a technical understanding of the issues involved. Many commenters are offering intuitions that look hard to formalize and generalize.
I think my answer is actually equivalent to Nornagest’s.
The obvious answer is that the factors you divide by are (0.9 / 0.5) and (0.1 / 0.5), which results in the same expected value as the pre-arranged calculation.
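If I'm reading that right, the check with the problem's payoffs is: 0.9/(0.9/0.5)*1000 + 0.1/(0.1/0.5)*100 = 0.5*1000 + 0.5*100 = 550 for "yea", and 0.5*700 + 0.5*700 = 700 for "nay", i.e. the same expected values as before the update.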
I don’t quite understand the formal structure behind your informal argument. If the decision of a group decider is 1/9th as important, does this also invalidate the reasoning that “yea” → 550 in the first case? If not, why not?
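(For reference, that reasoning is the pre-update expected value: "yea" gives 0.5*1000 + 0.5*100 = 550, against 0.5*700 + 0.5*700 = 700 for "nay".)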