If we consider it from an individual perspective, then we need to hold the other individuals fixed; that is, we assume everyone else sticks to the tails plan:
In this case:
10% chance of becoming a decider with heads and causing a $1000 donation
90% chance of becoming a decider with tails and causing a $100 donation
That is 0.1 × 1000 + 0.9 × 100 = $190, which is a pretty bad deal.
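As a quick check, here is that calculation as a minimal Python sketch (the probabilities and dollar figures are just the ones stated above):

```python
# Expected value from the individual perspective, holding everyone else
# fixed on the tails plan, using the figures stated above:
# 10% chance of being a decider with heads (causing a $1000 donation),
# 90% chance of being a decider with tails (causing a $100 donation).
expected_donation = 0.1 * 1000 + 0.9 * 100
print(expected_donation)  # 190.0
```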
If we want to allow everyone to switch, the difficulty is that the other people haven’t chosen their action yet (or even a set of actions with fixed probability), so we can’t really calculate expected values.
One way to approach this is to imagine that a random decider will be given a button, and if they press it, everyone switches to the heads plan. The problem is that when heads comes up, the single decider is 9 times as likely to get the button as any individual decider is when tails comes up. So we get:
A 50% chance of causing a $1000 donation and a 50% chance of causing a $100 donation. This is very similar to what Saturn was saying. However, there are subtle but important differences.
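Here is a quick Monte Carlo sketch of that single-button model; the pool of 10 potential deciders is an assumption on my part, chosen to match the 10%/90% figures above:

```python
import random

# Single-button model: flip a coin, pick 1 decider on heads or 9 on tails
# from a pool of 10 (assumed), then hand the button to one decider at random.
# Track a fixed individual (person 0): given that they hold the button,
# heads and tails should come out roughly 50/50.
random.seed(0)
trials = 100_000
heads_with_button = 0
tails_with_button = 0

for _ in range(trials):
    heads = random.random() < 0.5
    deciders = random.sample(range(10), 1 if heads else 9)
    button_holder = random.choice(deciders)
    if button_holder == 0:
        if heads:
            heads_with_button += 1  # pressing causes the $1000 donation
        else:
            tails_with_button += 1  # pressing causes the $100 donation

total = heads_with_button + tails_with_button
print(heads_with_button / total)  # ~0.5
print(tails_with_button / total)  # ~0.5
```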
Let’s suppose now that each decider gets a button and that they all press it or don’t press it, since they are identical. If someone presses a button and there is only one decider, they get the full effect. If there are 9 deciders, then the easiest way to model the situation is to give each button 1/9th of the effect. If we don’t do this, then we are clearly counting the benefit of everyone switching 9 times over.
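A minimal sketch of that shared-credit model (the combined expected value at the end is my own arithmetic from the figures above, not something stated explicitly):

```python
# Shared-credit model: with 9 deciders, crediting each button with 1/9th of
# the $100 effect keeps the total credited effect at $100, whereas crediting
# each button with the full effect counts it 9 times over.
n_deciders_tails = 9
effect_tails = 100

per_button_credit = effect_tails / n_deciders_tails
print(per_button_credit * n_deciders_tails)  # 100.0 -> counted once
print(effect_tails * n_deciders_tails)       # 900   -> counted 9 times

# Per-button expected value on this model, reusing the 10%/90% figures above
# (derived arithmetic, not a figure from the original comment):
print(0.1 * 1000 + 0.9 * per_button_credit)  # 110.0
```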
Damn, another one of my old comments, and this one has a mistake. If we hold all of the other individuals fixed on the tails plan, then there’s a 100% chance that no money is donated if you choose heads, making the expected value $0 rather than $190.
But also, UDT can just point out that Bayesian updates only work within the scope of problems solvable by CDT. When agents’ decisions are linked, you need something like UDT, and UDT doesn’t do any updates.
(Timeless decision theory may or may not do updates, but it can’t handle the fact that choosing Yay means that your clones also choose Yay when they are the sole decider. If you could make all agents choose Yay when you were a decider, but all choose Nay when you weren’t, you’d score higher on average; but of course the linkage doesn’t work this way, as their decision is based on what they see, not on what you see. This is the same issue that it has with Counterfactual Mugging.)
Further update: Do you want to cause good to be done, or do you want to be in a world where good is done? That’s basically what this question comes down to.