In the first case, you still don’t know which ones are “actual” and which ones are “impossible”, so you still have to decide based on expected actual entities.
One plausible reason is risk-averse optimization of welfare on {actual entities}. If you buy Tegmark level IV, then with the quantum coin you are guaranteed that the upside of the bet will be realized somewhere, whereas with the pi bet you might lose in every part of the multiverse.
In the long run, the two will come out the same, i.e. given a long series of logical bets L1, L2, … and a long series of quantum bets Q1, Q2, …, the average welfare increase over the whole multiverse from expected-utility maximization on the L bets will be the same as the average welfare increase from expected-utility maximization on the Q bets.
However, if you have just one bet to make, a “Q” bet is a guaranteed payoff somewhere, but the same cannot be said for an “L” bet.
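To make the long-run claim concrete, here is a minimal Monte Carlo sketch (my own construction, not from the thread; the probability, payoff, and bet count are made-up illustrative numbers): a “Q” bet pays off in a measure-P share of branches of every world, while an “L” bet pays off in every branch or in none. Averaged over many bets the welfare gains match, but the worst single Q bet still pays off somewhere, whereas the worst single L bet pays off nowhere.

    import random

    P = 0.5        # win probability: branch measure for a Q bet, credence for an L bet
    PAYOFF = 1.0   # welfare gained by the winning side of a bet
    N_BETS = 10_000

    def q_bet():
        # Quantum coin: a measure-P share of branches wins, so the gain in
        # measure-weighted multiverse welfare is exactly P * PAYOFF.
        return P * PAYOFF

    def l_bet():
        # Logical coin (e.g. parity of an unknown digit of pi): the whole
        # multiverse wins or the whole multiverse loses.
        return PAYOFF if random.random() < P else 0.0

    q_gains = [q_bet() for _ in range(N_BETS)]
    l_gains = [l_bet() for _ in range(N_BETS)]

    print(sum(q_gains) / N_BETS)  # exactly 0.5
    print(sum(l_gains) / N_BETS)  # ~0.5 once N_BETS is large
    print(min(q_gains))           # 0.5: every Q bet pays off somewhere
    print(min(l_gains))           # almost surely 0.0: a single L bet can pay off nowhere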
Good point. I still have some hard-to-verbalize thoughts against this, but I’ll have to think about it more to tease them out.
Since risk aversion is the result of a ‘convex frown’ (i.e. concave) utility function, and since we’re talking about differences in the number of entities, we’d have to have a utility function that is convex frown over the number of entities. This means that the “shut up and multiply” rule for saving lives would be just a first-order approximation, valid near the margin. It’s certainly possible to have this type of preference, but I have a hunch that this isn’t the case.
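A toy example of what that would mean (my own numbers, purely illustrative): take a concave utility over the number of entities, say u(n) = sqrt(n). For large stakes the concavity produces risk aversion, while for marginal stakes the sure option and the gamble are indistinguishable to first order, which is the sense in which “shut up and multiply” is a first-order approximation near the margin.

    import math

    u = math.sqrt      # an illustrative concave ("convex frown") utility over entity count
    n0 = 1_000_000     # current number of entities (made up)

    # Large gamble: a sure 100,000 extra entities vs. a 50% chance of 200,000 extra.
    sure   = u(n0 + 100_000)
    gamble = 0.5 * u(n0 + 200_000) + 0.5 * u(n0)
    print(sure > gamble)           # True: the concave u prefers the sure thing

    # Marginal gamble: a sure 1 extra entity vs. a 50% chance of 2 extra.
    sure_m   = u(n0 + 1)
    gamble_m = 0.5 * u(n0 + 2) + 0.5 * u(n0)
    print(abs(sure_m - gamble_m))  # ~1e-10: to first order, only the expected number matters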
For there to be a difference, you’d also have to be indifferent between extra observers in a populated Everett branch and extra observers in an empty one, but that seems likely.
If you have preferences about, e.g. the average level of well-being among actual entities, it could matter a lot.
How so?
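One way it could matter (a sketch of my own, not an answer given in the thread; the per-branch averaging and the welfare numbers are assumptions): total welfare is indifferent to where an extra observer goes, but average welfare is not.

    populated_branch = [8.0, 8.0, 8.0]   # three observers at welfare 8
    empty_branch     = []                # no observers yet
    new_observer     = 5.0

    def total(ws):
        return sum(ws)

    def average(ws):
        return sum(ws) / len(ws) if ws else None

    # Total welfare rises by 5 either way, so a total view is indifferent.
    print(total(populated_branch + [new_observer]) - total(populated_branch))  # 5.0
    print(total(empty_branch + [new_observer]) - total(empty_branch))          # 5.0

    # Average welfare reacts differently: it falls in the populated branch and
    # comes into existence in the empty one.
    print(average(populated_branch + [new_observer]))  # 7.25, down from 8.0
    print(average(empty_branch + [new_observer]))      # 5.0, where before there was none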