I don’t think that helps.
Consider case (1), where Omega comes to you and says “If the Rth digit of pi is <5, then I created 1e500 copies of you,” and then you have to make some decision. Compare that to case (2), where Omega uses a quantum coin toss instead.
Can you think of any decision that you’d want to make differently in case 1 than in case 2?
In case 1 you’re deciding to do something that might affect 1 person or 1e500 people (with equal probability), and in case 2 you’re deciding to do something that affects 1 person AND 1e500 people (each with half the measure).
Since you decide over expected values anyway, it shouldn’t matter. It looks like the difference between a 50% chance of 2 utilons and a 100% chance of 1 utilon.
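Spelling out the arithmetic behind that indifference claim — a minimal sketch, assuming utility really is linear in utilons, which is exactly the assumption the rest of the thread pokes at:

\[
\mathbb{E}[U_{\text{risky}}] = 0.5 \times 2 + 0.5 \times 0 = 1 = 1.0 \times 1 = \mathbb{E}[U_{\text{certain}}],
\]

so a maximizer of expected utilons is indifferent between the two gambles, and any difference between the pi case and the quantum case has to come from somewhere other than this calculation.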
If you have preferences about, e.g. the average level of well-being among actual entities, it could matter a lot.
How so?
In the first case you still don’t know which ones are “actual” and which ones are “impossible”, so you still have to decide based on the expected number of actual entities.
One plausible reason is risk-averse optimization of welfare over {actual entities}. If you buy Tegmark level IV, then with the quantum coin you are guaranteed that the upside of the bet will be realized somewhere, whereas with the digit of pi you might lose in every part of the multiverse.
In the long run, the two come out the same: given a long series of logical bets L1, L2, … and a long series of quantum bets Q1, Q2, …, the average welfare increase over the whole multiverse from expected-utility maximization on the L bets will be the same as from expected-utility maximization on the Q bets.
However, if you have just one bet to make, a “Q” bet is a guaranteed payoff somewhere, but the same cannot be said for an “L” bet.
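A toy numeric version of the single-bet vs. long-run contrast — a minimal sketch under my own simplifying assumptions (each bet adds 2 utilons of multiverse-wide welfare on a win and 0 on a loss, with win probability 1/2; the payoff numbers are illustrative, not anything Omega specified):

```python
import random

# Toy model (my assumptions, not the thread's): each bet pays 2 utilons of
# multiverse-wide welfare on a win and 0 on a loss, with win probability 1/2.
#
# - A quantum ("Q") bet splits the wavefunction: both outcomes are actual,
#   each with measure 1/2, so the welfare added to the multiverse as a whole
#   is exactly 0.5*2 + 0.5*0 = 1, with zero variance.
# - A logical ("L") bet (e.g. on a digit of pi) has one answer shared by
#   every branch, so the whole multiverse gets either 2 or 0.

def q_bet():
    return 0.5 * 2 + 0.5 * 0                   # both branches actual: always 1

def l_bet():
    return 2 if random.random() < 0.5 else 0   # one answer for all branches

# A single bet: Q guarantees 1 utilon somewhere; L is all-or-nothing.
print(q_bet(), l_bet())

# Many bets: the per-bet averages converge, matching the long-run claim.
n = 100_000
print(sum(q_bet() for _ in range(n)) / n,
      sum(l_bet() for _ in range(n)) / n)
```

Over many bets both averages settle near 1, but any single L draw is all-or-nothing while a Q “draw” never is, which is the only gap a risk-averse valuation can exploit.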
Good point. I still have some hard-to-verbalize thoughts against this, but I’ll have to think about it more to tease them out.
Since risk aversion is the result of a ‘convex frown’ (i.e. concave) utility function, and since we’re talking about differences in the number of entities, we’d have to have a utility function that is ‘convex frown’ over the number of entities. This means that the “shut up and multiply” rule for saving lives would be just a first-order approximation, valid near the margin. It’s certainly possible to have this type of preference, but I have a hunch that this isn’t the case.
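Here is one way to formalize that hunch — my formalization, not anything stated above — writing N for the measure-weighted number of actual copies and assuming U is applied to that total (i.e. you only care how many copies are actual, not which branch they sit in, which is the indifference condition the next comment raises). The quantum coin makes N deterministic, the pi digit makes it a lottery, and Jensen’s inequality then separates the two for any concave U:

\[
\mathbb{E}\!\left[U(N_{\pi})\right] \;=\; \tfrac{1}{2}U(1) + \tfrac{1}{2}U(10^{500})
\;\le\;
U\!\left(\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 10^{500}\right) \;=\; U(N_{\text{quantum}}),
\]

with strict inequality for strictly concave U. Linear U — unqualified “shut up and multiply” — restores exact indifference, which is why such a preference would demote the rule to a first-order approximation valid near the margin.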
For there to be a difference, you’d also have to be indifferent between extra observers in a populated Everett branch and extra observers in an empty one, but that seems likely.