Sleeping anti-beauty and the presumptuous philosopher
My approach for dividing utility between copies gives the usual and expected solutions to the Sleeping Beauty problem: if all copies are offered bets, take 1⁄3 odds; if only one copy is offered bets, take 1⁄2 odds.
This makes sense, because my approach is analogous to “some future version of Sleeping Beauty gets to keep all the profits”.
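For concreteness, here is a minimal sketch of that betting arithmetic (my own illustration, not from the original approach): it assumes a fair coin, one copy in the small universe and two in the large one, a contract paying $1 if the universe is small, and all profits pooled as if one future Sleeping Beauty keeps them.

```python
# Hypothetical illustration: pooled expected profit when each copy that is
# offered the bet buys one $1-if-small-universe contract at `price`.

def expected_pooled_profit(price: float, large_buyers: int) -> float:
    """large_buyers: how many of the two large-universe copies are offered the bet."""
    small = 0.5 * (1.0 - price)            # one copy exists and the contract pays $1
    large = 0.5 * (-price * large_buyers)  # the buyers pay the price; the contract pays nothing
    return small + large

print(expected_pooled_profit(1 / 3, 2))  # ~0: 1/3 is the fair price when both copies bet
print(expected_pooled_profit(1 / 2, 1))  # ~0: 1/2 is fair when only one copy is offered the bet
```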
The presumptuous philosopher problem is subtly different from the Sleeping Beauty problem. It can best be phrased as a Sleeping Beauty problem where each copy doesn’t care about any other copy. Solving this is a bit more subtle, but a useful half-way point is the “Sleeping Anti-Beauty” problem.
Here, as before, one or two copies are created depending on the result of a coin flip. However, if two copies are created, they are the reverse of mutually altruistic: they derive disutility from the other copy achieving its utility. So if both copies receive $1, neither copy’s utility increases: each is happy to have the cash, but angry the other copy also has cash.
Apart from this difference in indexical utility, the two copies are identical, and will reach the same decision. Now, as before, every copy is approached with bets on whether it is in the large universe (with two copies) or the small one (with a single copy). Using standard UDT/TDT Newcomb-problem type reasoning, they will always take the small universe side in any bet (as any gain or loss in the large universe is cancelled out by the same gain or loss accruing to the other, disliked copy).
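A minimal sketch of that argument, under my own formalisation (not spelled out in the post): a copy’s utility is its own winnings minus the other copy’s winnings, and by symmetry both copies make the same choice.

```python
# Hypothetical Sleeping Anti-Beauty bet: a $1 contract bought at `price`,
# paying out either in the small universe or in the large one.

def sab_expected_utility(price: float, pays_in: str) -> float:
    """Expected indexical utility of buying the contract. pays_in: "small" or "large"."""
    # Small universe (prob 1/2): no other copy, so utility is just own profit.
    small = (1.0 - price) if pays_in == "small" else -price

    # Large universe (prob 1/2): the other copy buys the same contract, and its
    # identical profit is subtracted from my utility -- everything cancels.
    own_large = (1.0 - price) if pays_in == "large" else -price
    large = own_large - own_large  # always 0

    return 0.5 * small + 0.5 * large

print(sab_expected_utility(0.9, "small"))  # > 0: buy even at a steep price
print(sab_expected_utility(0.1, "large"))  # < 0: refuse even at a generous price
```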
Now, you could model the presumptuous philosopher by saying they have a 50% chance of being in a Sleeping Beauty (SB) situation and a 50% chance of being in a Sleeping Anti-Beauty (SAB) situation (indifference modelled as halfway between altruism and hate).
There are 4 equally likely possibilities here: small universe in SB, large universe in SB, small universe in SAB, large universe in SAB. A contract that gives $1 in a small universe is worth 0.25 + 0 + 0.25 + 0 = $0.5, while a contract that gives $1 in a large universe is worth 0 + 0.25*2 + 0 + 0 = $0.5 (as long as it’s offered to everyone). So it seems that a presumptuous philosopher should take even odds on the size of the universe if he doesn’t care about the other presumptuous philosophers.
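The four-way sum can be checked directly. A sketch, assuming the contract is offered to (and bought by) every copy, with the other copy’s $1 weighted +1 in SB, −1 in SAB, and absent in the small universes:

```python
# Each scenario has probability 0.25; "weight" is how the agent values the
# other copy's winnings in that scenario (None: no other copy exists).
scenarios = [
    (0.25, "small", None),   # Sleeping Beauty, small universe
    (0.25, "large", +1.0),   # Sleeping Beauty, large universe (altruism)
    (0.25, "small", None),   # Sleeping Anti-Beauty, small universe
    (0.25, "large", -1.0),   # Sleeping Anti-Beauty, large universe (hate)
]

def contract_value(pays_in: str) -> float:
    """Expected utility of a contract paying $1 in universes of size `pays_in`."""
    total = 0.0
    for prob, universe, weight in scenarios:
        own = 1.0 if universe == pays_in else 0.0
        other = own * weight if weight is not None else 0.0  # the other copy holds the same contract
        total += prob * (own + other)
    return total

print(contract_value("small"))  # 0.25 + 0 + 0.25 + 0  = 0.5
print(contract_value("large"))  # 0 + 0.25*2 + 0 + 0   = 0.5
```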
It’s no coincidence that this result can be reached by UDT-like arguments such as “take the objective probabilities of the universes, and consider the total impact of your decision being X, including all other decisions that must be the same as yours”. I’m hoping to find more fundamental reasons to justify this approach soon.
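Under my reading of that recipe, the presumptuous philosopher case reduces to a very short calculation: weight each universe by its objective probability and count only the consequences he cares about of everyone deciding as he does.

```python
def udt_value(outcomes):
    """Sum of objective_probability * (utility to me of everyone deciding as I do)."""
    return sum(p * u for p, u in outcomes)

# Indifferent to the other philosopher, so only his own $1 counts in the large universe.
print(udt_value([(0.5, 1.0), (0.5, 0.0)]))  # contract pays in the small universe -> 0.5
print(udt_value([(0.5, 0.0), (0.5, 1.0)]))  # contract pays in the large universe -> 0.5
```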