As I mentioned earlier, it’s not an argument against halfers in general; it’s against halfers with a specific kind of utility function, which sounds like this:
“In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be “me” right now.”
In the above scenario, there is a 1⁄2 chance that both Jack and Roger will be created, a 1⁄4 chance of only Jack, and a 1⁄4 chance of only Roger.
Before finding out who you are, averaging gives a 1:1 odds ratio, and so (as you’ve agreed) a cutoff of 1⁄2.
After finding out which of Jack and Roger you in fact are, you have only one possible self in the TAILS world and one possible self in the relevant HEADS+Jack or HEADS+Roger world, which gives a 2:1 odds ratio and a cutoff of 2⁄3.
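Spelling that arithmetic out in a quick sketch (the probabilities are just the ones above, and I’m reading “cutoff” as odds/(1+odds), i.e. the price at which the bet stops being worth taking):

```python
from fractions import Fraction

# World probabilities from the setup: TAILS creates both Jack and Roger,
# HEADS creates exactly one of them.
P_TAILS       = Fraction(1, 2)   # both Jack and Roger exist
P_HEADS_JACK  = Fraction(1, 4)   # only Jack exists
P_HEADS_ROGER = Fraction(1, 4)   # only Roger exists

# Before learning your name: averaging weighs TAILS against HEADS as a
# whole, giving 1:1 odds and a cutoff of 1/2.
odds_before   = P_TAILS / (P_HEADS_JACK + P_HEADS_ROGER)   # 1:1
cutoff_before = odds_before / (1 + odds_before)            # 1/2

# After learning you are (say) Jack: only the TAILS world and the
# HEADS+Jack world still contain a possible "you", giving 2:1 odds and a
# cutoff of 2/3.  By symmetry, learning "Roger" gives the same numbers.
odds_after   = P_TAILS / P_HEADS_JACK                      # 2:1
cutoff_after = odds_after / (1 + odds_after)               # 2/3

print(cutoff_before, cutoff_after)   # 1/2 2/3
```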
Ultimately, I guess the essence here is that this kind of utility function is equivalent to a failure to conditionalise properly, and thus, even though you’re not using probabilities, you’re still “Dutch-bookable” with respect to your own utility function.
I guess it could be argued that this result is somewhat trivial, but the utility function mentioned above is at least intuitively reasonable, so I don’t think it’s meaningless to show that having that kind of utility function is going to put you in trouble.
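To make the “Dutch-bookable” claim concrete, here is one way a bookie could run it; the particular ticket and prices are only my illustration, not part of the original setup. Since the cutoff predictably moves from 1⁄2 to 2⁄3 whichever name you turn out to have, the bookie just trades against the shift:

```python
from fractions import Fraction

# The agent's own valuation of a "$1 if TAILS" ticket, before and after
# learning its name (after the reveal it is 2/3 whichever name it learns).
value_before = Fraction(1, 2)
value_after  = Fraction(2, 3)

# Step 1: before the name is revealed, the agent sells the bookie such a
# ticket for a little more than the agent currently thinks it is worth.
sell_to_bookie = value_before + Fraction(1, 20)   # 11/20, so the agent accepts

# Step 2: after the name is revealed, the agent buys the ticket back for
# a little less than it now thinks it is worth.
buy_back = value_after - Fraction(1, 20)          # 37/60, so the agent accepts

# No bet is left outstanding, so the coin never pays out; the agent is
# simply poorer by the difference, whatever the coin shows.
agent_loss = buy_back - sell_to_bookie
print(agent_loss)   # 1/15
```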
“In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be “me” right now.”
Oh. I see. The problem is that this utility function takes a “halfer” position on combining utility (averaging) and a “thirder” position on counterfactual worlds where the agent doesn’t exist (removing them from consideration). I’m not even sure it’s a valid utility function: it seems to mix utility and probability.
For example, in the heads world, it values “50% Roger vs 50% Jack” at the full utility amount, yet values only one of “Roger” and “Jack” at full utility. The correct way of doing this would be to value “50% Roger vs 50% Jack” at 50%, and then you just have a rescaled version of the thirder utility.
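Putting a toy number on that, from Jack’s perspective and with a one-util stake (both of those are just my framing of the point, not part of the setup):

```python
# World utilities from Jack's point of view.
U_HEADS_JACK  = 1.0   # only Jack created: he exists and gets the util
U_HEADS_ROGER = 0.0   # only Roger created: this world contains none of Jack's happiness

# Any utility function over world-states has to value the 50/50 heads
# gamble between the two worlds at the average of their utilities.
value_over_worlds = 0.5 * U_HEADS_JACK + 0.5 * U_HEADS_ROGER   # 0.5

# The mixed halfer/thirder utility instead drops the world where "you"
# don't exist and averages over the remaining possible selves, so the
# same gamble gets the full amount.  That gap is the inconsistency.
value_mixed = 1.0

print(value_over_worlds, value_mixed)   # 0.5 vs 1.0
```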
I think I see the idea you’re getting at, but I suspect that the real lesson of your example is that this mixed halfer/thirder idea cannot be made coherent in terms of utilities over worlds.
I don’t think that’s entirely correct; SSA, for example, is a halfer position and it does exclude worlds where you don’t exist, as do many other anthropic approaches.
Personally I’m generally skeptical of averaging over agents in any utility function.
Which is why I don’t use anthropic probability, because it leads to these kinds of absurdities. The halfer position is defined in the top post (as is the thirder), and your setup uses aspects of both approaches. If it’s incoherent, then SSA is incoherent, which I have no problem with. SSA != halfer.
Averaging makes a lot of sense if the number of agents is going to be increased and decreased in non-relevant ways.
E.g.: you are an upload. Soon, you are going to experience eating a chocolate bar, then stubbing your toe, then playing a tough but intriguing game. During this time, you will be simulated on n computers, all running exactly the same program of you experiencing this, without any deviations. But n may vary from moment to moment. Should you be willing to pay to make n higher during pleasant experiences or lower during unpleasant ones, given that you will never detect this change?
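With some entirely made-up numbers, the point is that under averaging n drops out of the calculation completely (a straight sum over copies is shown only for contrast):

```python
# Made-up per-copy utilities for the three experiences above.
experiences = {"chocolate bar": 5.0, "stubbed toe": -3.0, "intriguing game": 8.0}

def averaged(per_copy_utility: float, n: int) -> float:
    # Averaging over the n identical copies: n cancels out, so paying to
    # change n during any experience buys you nothing.
    return (n * per_copy_utility) / n

def summed(per_copy_utility: float, n: int) -> float:
    # Summing instead: n matters, and you would pay to raise it during
    # the chocolate bar and lower it during the stubbed toe.
    return n * per_copy_utility

for name, u in experiences.items():
    print(name, averaged(u, 1), averaged(u, 1000), summed(u, 1), summed(u, 1000))
```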
I think there are some rather significant assumptions underlying the idea that they are “non-relevant”. At the very least, if the agents were distinguishable, I think you should indeed be willing to pay to make n higher. On the other hand, if they’re indistinguishable then it’s a more difficult question, but the anthropic averaging I suggested in my previous comments leads to absurd results. What’s your proposal here?
the anthropic averaging I suggested in my previous comments leads to absurd results.
The anthropic averaging leads to absurd results only because it isn’t a utility function over states of the world. Under heads, it ranked 50% Roger + 50% Jack differently from the average utility of those two worlds.