Linked decisions are also what make the halfer paradox go away.
I don’t think linked decisions make the halfer paradox I brought up go away. Any counterintuitive decisions you make under UDT are simply ones that lead to a gain in counterfactual possible worlds at the cost of a loss in actual possible worlds. However, in the instance above you’re losing both in the real scenario, in which you’re Jack, and in the counterfactual one, in which you turned out to be Roger.
Granted, the “halfer” paradox I raised is an argument against having a specific kind of indexical utility function (selfish utility w/ averaging over subjectively indistinguishable agents) rather than an argument against being a halfer in general. SSA, for example, would tell you to stick to your guns, because you would still assign probability 1⁄2 even after you know whether you’re “Jack” or “Roger”, and thus it doesn’t suffer from the same paradox. That said, due to the reference class problem, if you are told whether you’re Jack or Roger before being told everything else, SSA would give the wrong answer, so it’s not like it’s any better...
To get a paradox that hits at the “thirder” position specifically, in the same way as yours did, I think you need only replace the ticket with something mutually beneficial—like putting on an enjoyable movie that both can watch. Then the thirder would double count the benefit of this, before finding out who they were.
Are you sure? It doesn’t seem to me that this would be paradoxical; since the decisions are linked, you could argue that
“If I hadn’t put on an enjoyable movie for Jack/Roger, Jack/Roger wouldn’t have put on an enjoyable movie for me, and thus I would be worse off”.
If, on the other hand, only one agent gets to make that decision, then the agent-parts would have ceased to be subjectively indistinguishable as soon as one of them was offered the decision.
Did I make a mistake? It’s possible—I’m currently exhausted. Let’s go through this carefully. Can you spell out exactly why you think that halfers are such that:
1) They are only willing to pay 1⁄2 for a ticket.
2) They know that they must either be Jack or Roger.
3) They know that upon finding out which one they are, regardless of whether it’s Jack or Roger, they would be willing to pay 2⁄3.
I can see 1) and 2), but, thinking about it, I fail to see 3).
As I mentioned earlier, it’s not an argument against halfers in general; it’s against halfers with a specific kind of utility function, which sounds like this:
“In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be “me” right now.”
In the above scenario, there is a 1⁄2 chance that both Jack and Roger will be created, a 1⁄4 chance of only Jack, and a 1⁄4 chance of only Roger.
Before finding out who you are, averaging leads to a 1:1 odds ratio, and so (as you’ve agreed) to a cutoff of 1⁄2.
After finding out whether you are, in fact, Jack or Roger, you have only one possible self in the TAILS world, and one possible self in the relevant HEADS+Jack/HEADS+Roger world, which leads to a 2:1 odds ratio and a cutoff of 2⁄3.
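The two cutoffs above can be checked with a minimal sketch. Two assumptions not stated in the thread: the ticket pays 1 if the coin landed TAILS (the world where both are created) and 0 under HEADS, and the averaging utility function weighs each world by its prior probability while dropping worlds that contain no self.

```python
from fractions import Fraction as F

# Worlds: (prior probability, Jack exists?, assumed ticket payout).
WORLDS = [
    (F(1, 2), True,  1),  # TAILS: both Jack and Roger created
    (F(1, 4), True,  0),  # HEADS: only Jack created
    (F(1, 4), False, 0),  # HEADS: only Roger created
]

def cutoff(worlds):
    """Break-even ticket price x, solving sum p * (payout - x) = 0.

    Averaging over indistinguishable copies changes nothing here:
    every copy in a given world buys the same ticket and gets the
    same per-copy utility (payout - x), so the average equals it."""
    total = sum(p for p, _, _ in worlds)
    gain = sum(p * pay for p, _, pay in worlds)
    return gain / total

# Before the reveal: every world contains at least one self -> 1:1 odds.
before = cutoff(WORLDS)                      # 1/2

# After learning "I am Jack": the HEADS+Roger world contains no self
# under this utility function, so it drops out -> 2:1 odds.
after = cutoff([w for w in WORLDS if w[1]])  # 2/3

print(before, after)
```

By symmetry the same 2⁄3 cutoff comes out if the agent learns it is Roger instead.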
Ultimately, I guess the essence here is that this kind of utility function is equivalent to a failure to properly conditionalise, and thus even though you’re not using probabilities you’re still “Dutch-bookable” with respect to your own utility function.
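The Dutch book implied here can be made concrete: a hypothetical bookie trades against the shift in the agent's cutoff from 1⁄2 to 2⁄3, and the agent loses in every world. A minimal sketch, assuming the same ticket (pays 1 on TAILS) and an agent willing to trade at its stated break-even prices:

```python
from fractions import Fraction as F

# Cutoffs for the same TAILS-ticket: 1/2 before the agent learns its
# identity, 2/3 after (whichever of Jack or Roger it turns out to be).
BEFORE, AFTER = F(1, 2), F(2, 3)

# Step 1: before the reveal, the bookie buys the agent's ticket at its
# stated value of 1/2 (the agent is indifferent at that price).
agent_cash = BEFORE
# Step 2: after the reveal, the agent now values the very same ticket
# at 2/3, so the bookie sells it back at that price.
agent_cash -= AFTER

# The agent ends up holding the ticket it started with, minus 1/6 in
# cash, regardless of the coin flip or of who it turned out to be.
print(agent_cash)  # -1/6
```

Since the loss occurs in every possible world, no appeal to probabilities is needed: the agent is exploited purely through its own stated valuations.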
I guess it could be argued that this result is somewhat trivial, but the utility function mentioned above is at least intuitively reasonable, so I don’t think it’s meaningless to show that having that kind of utility function is going to put you in trouble.
“In any possible world I value only my own current and future subjective happiness, averaged over all of the subjectively indistinguishable people who could equally be “me” right now.”
Oh, I see. The problem is that this utility function takes a “halfer” position on combining utilities (averaging) and a “thirder” position on counterfactual worlds where the agent doesn’t exist (removing them from consideration). I’m not even sure it’s a valid utility function—it seems to mix utility and probability.
For example, in the heads world, it values “50% Roger vs 50% Jack” at the full utility amount, yet values only one of “Roger” and “Jack” at full utility. The correct way of doing this would be to value “50% Roger vs 50% Jack” at 50%, and then you just have a rescaled version of the thirder utility.
I think I see the idea you’re getting at, but I suspect that the real lesson of your example is that this mixed halfer/thirder idea cannot be made coherent in terms of utilities over worlds.
I don’t think that’s entirely correct; SSA, for example, is a halfer position and it does exclude worlds where you don’t exist, as do many other anthropic approaches.
Personally I’m generally skeptical of averaging over agents in any utility function.
This is why I don’t use anthropic probability: it leads to these kinds of absurdities. The halfer position is defined in the top post (as is the thirder), and your setup uses aspects of both approaches. If it’s incoherent, then SSA is incoherent, which I have no problem with. SSA != halfer.
Averaging makes a lot of sense if the number of agents is going to be increased and decreased in non-relevant ways.
E.g.: you are an upload. Soon, you are going to experience eating a chocolate bar, then stubbing your toe, then playing a tough but intriguing game. During this time, you will be simulated on n computers, all running exactly the same program of you experiencing this, without any deviations. But n may vary from moment to moment. Should you be willing to pay to make n higher during pleasant experiences, or lower during unpleasant ones, given that you will never detect this change?
I think there are some rather significant assumptions underlying the idea that they are “non-relevant”. At the very least, if the agents were distinguishable, I think you should indeed be willing to pay to make n higher. On the other hand, if they’re indistinguishable then it’s a more difficult question, but the anthropic averaging I suggested in my previous comments leads to absurd results.
the anthropic averaging I suggested in my previous comments leads to absurd results.
The anthropic averaging leads to absurd results only because it wasn’t a utility function over states of the world. Under heads, it ranked 50% Roger + 50% Jack differently from the average utility of those two worlds.
What’s your proposal here?