Here’s a simpler equivalent version of the problem:
A program will change the color of the room based on a conditional random number pair (x,y). The first random number x is binary (a coin toss). If x comes up heads/1 then y is green with 90% probability and red with 10% probability. If x comes up tails/0 then y is green with 10% and red with 90%.
You are offered an initial bet that pays out +$1 if the room turns green but −$3 if it turns red. Unconditionally, this bet has an obviously negative EV of −$1 (green has overall probability 0.5, so 0.5·$1 − 0.5·$3). But if you observe the coin come up heads, the EV is now +$0.60 (0.9·$1 − 0.1·$3), so you should then take it.
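To spell out the arithmetic, here's a minimal sketch of the two-stage bet, using the probabilities stated above (the variable and function names are my own, just for illustration):

```python
# Two-stage bet: a fair coin x, then room color y with
# P(green|heads)=0.9, P(red|heads)=0.1, P(green|tails)=0.1, P(red|tails)=0.9.
# The bet pays +$1 on green and -$3 on red.

P_HEADS = 0.5
P_GREEN_GIVEN = {"heads": 0.9, "tails": 0.1}
PAYOUT = {"green": 1.0, "red": -3.0}

def ev_given(p_green):
    """Expected payout of the bet when green has probability p_green."""
    return p_green * PAYOUT["green"] + (1 - p_green) * PAYOUT["red"]

# Unconditional probability of green: marginalize over the coin.
p_green = P_HEADS * P_GREEN_GIVEN["heads"] + (1 - P_HEADS) * P_GREEN_GIVEN["tails"]

print(round(ev_given(p_green), 10))                 # prior EV: -1.0
print(round(ev_given(P_GREEN_GIVEN["heads"]), 10))  # EV after seeing heads: 0.6
```

So the same bet flips from −$1 to +$0.60 once you condition on the first random variable.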
Creating more copies of an observer is just what the (classical) multiverse is doing anyway, since probability is just measure over the slices of the multiverse compatible with your observations (à la Solomonoff induction, Bayesianism, etc.).
I don’t see how it’s equivalent.
How is what you are describing different from just a coin toss, where you win $1 if it's Heads and lose $3 if it's Tails? That's obviously negative EV. But then, once the coin is tossed, you see that it happened to be Heads, and you now wish you had taken the bet, given your new knowledge of the outcome of the random event?
It is not dramatically different, but there are two random variables: the first is a coin toss, and the second has p(green | heads) = 0.9, p(red | heads) = 0.1, p(green | tails) = 0.1, p(red | tails) = 0.9. So you need to multiply that out to get the conditional probabilities and payouts.
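The "multiply that out" step can be sketched as a joint-probability table, again assuming the fair coin and the 0.9/0.1 conditional colors from above:

```python
# Build the joint distribution p(x, y) = p(x) * p(y | x), then compute
# the prior EV and the EV conditional on observing heads.
payout = {"green": 1.0, "red": -3.0}
p_color = {"heads": {"green": 0.9, "red": 0.1},
           "tails": {"green": 0.1, "red": 0.9}}

joint = {(x, y): 0.5 * p_color[x][y]
         for x in ("heads", "tails") for y in ("green", "red")}

# Prior EV: sum payouts over the full joint distribution.
prior_ev = sum(p * payout[y] for (x, y), p in joint.items())

# Conditional EV after observing heads: renormalize to the heads slice.
p_heads = sum(p for (x, _), p in joint.items() if x == "heads")
cond_ev = sum(p * payout[y]
              for (x, y), p in joint.items() if x == "heads") / p_heads

print(round(prior_ev, 10), round(cond_ev, 10))  # -1.0 0.6
```

The joint table is {(heads, green): 0.45, (heads, red): 0.05, (tails, green): 0.05, (tails, red): 0.45}, and conditioning is just restricting to one slice and renormalizing.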
But my claim is that the seemingly complex bit, where 18 vs. 2 copies of you are created conditional on an event, is identical to regular conditional probability: 18 out of 20 copies is just a measure of 0.9. In other words, my claim (which I thought was similar to your point in the post) is that regular probability is equivalent to measure over identical observers in the multiverse.