The deeper point is important, and I think you’re mistaken about the necessary and sufficient conditions for an isomorphism here.
If a human appears in a gnome’s cell, then that excludes the counterfactual world in which the human did not appear in the gnome’s cell. However, on UDT, the gnome’s decision does depend on the payoffs in that counterfactual world.
Thus, for the isomorphism argument to hold, the preferences of the human and gnome must align over counterfactual worlds as well as factual ones. It is not sufficient to have the same probabilities for payoffs given linked actions when you have to make a decision, you also have to have the same probabilities for payoffs given linked actions when you don’t have to make a decision.
Could you give a worked example of the correct action for the gnome with a human in their cell depending on the payoffs for the gnome without a human in their cell? (Assuming they know whether there’s a human in their cell, and know the three different possible sets of payoffs for the available actions—if these constraints were relaxed I think it would be clearly doable. As it is I’m doubtful.)
I already have a more detailed version here; see the different calculations for E[T] vs E[IT]. However, I’ll give you a short version. From the gnome’s perspective, the two different types of total utilitarian utility functions are:
T = total $ over both cells
IT = total $ over both cells if there’s a human in my cell, 0 otherwise.
and the possible outcomes are
p=1/4 for heads + no human in my cell
p=1/4 for heads + human in my cell
p=1/2 for tails + human in my cell.
As you can see, these two utility functions differ only when there is no human in the gnome’s cell. Moreover, by the assumptions of the problem, the gnomes’ utility functions are symmetric, and so are their decisions. UDT proper doesn’t apply to gnomes whose utility function is IT, because IT is a different function for each gnome, but the more general principle of linked decisions still applies: despite the differences in utility functions, there is an obvious symmetry between the gnomes’ situations. Thus we assume a linked decision in which each gnome recommends buying a ticket for $x.
The utility calculations are therefore
E[T] = (1/4)(-x) + (1/4)(-x) + (1/2)·2(1-x) = 1 - (3/2)x (breakeven at x = 2/3)
E[IT] = (1/4)(0) + (1/4)(-x) + (1/2)·2(1-x) = 1 - (5/4)x (breakeven at x = 4/5)
Thus gnomes who are indifferent when no human is present (U = IT) should precommit to a value of x=4/5, while gnomes who still care about the total $ when no human is present (U = T) should precommit to a value of x=2/3.
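To make the arithmetic easy to check, here is a minimal sketch in Python (the names expected_T and expected_IT are just illustrative, not from the original discussion) that re-runs the two calculations above with exact fractions, assuming both gnomes follow the same precommitment to buy at price x:

```python
from fractions import Fraction as F

def expected_T(x):
    # T = total $ over both cells; the three outcomes are
    # (1/4) heads + no human here, (1/4) heads + human here, (1/2) tails + human here.
    return F(1, 4) * (-x) + F(1, 4) * (-x) + F(1, 2) * 2 * (1 - x)

def expected_IT(x):
    # IT = total $ over both cells if there's a human in my cell, 0 otherwise,
    # so the heads + no-human branch contributes nothing.
    return F(1, 4) * 0 + F(1, 4) * (-x) + F(1, 2) * 2 * (1 - x)

print(expected_T(F(2, 3)), expected_IT(F(2, 3)))   # 0 1/6   -> T-gnome breaks even at 2/3
print(expected_T(F(4, 5)), expected_IT(F(4, 5)))   # -1/5 0  -> IT-gnome breaks even at 4/5
```

Not buying yields 0 under both T and IT in this setup, so the breakeven price is simply where each expected utility crosses zero.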
Note also that this is invariant under the choice of which constant value we use to represent indifference. For some constant C, the correct calculation would actually be
E[IT | buy at $x] = (1/4)C + (1/4)(-x) + (1/2)·2(1-x) = (1/4)C + 1 - (5/4)x
E[IT | don’t buy] = (1/4)C + (1/4)(0) + (1/2)(0) = (1/4)C
and so the breakeven point remains at x = 4/5.
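A quick check of that invariance, in the same sketch style (C is an arbitrary constant standing in for indifference):

```python
from fractions import Fraction as F

def expected_IT_buy(x, C):
    # The heads + no-human branch is scored as the constant C rather than 0.
    return F(1, 4) * C + F(1, 4) * (-x) + F(1, 2) * 2 * (1 - x)

def expected_IT_dont_buy(C):
    return F(1, 4) * C

# Whatever C is, buying at x = 4/5 is exactly as good as not buying,
# because C enters both branches with the same weight.
for C in (F(0), F(10), F(-7)):
    assert expected_IT_buy(F(4, 5), C) == expected_IT_dont_buy(C)
```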
Thanks for giving this great example. This works because in the total utilitarian case (and in the average utilitarian case, among other more general possibilities) the payoff of one gnome depends on the action of the other, so they have to coordinate for maximum payoff. That effect doesn’t exist in any selfish case, which is what I was thinking about at the time. But this definitely shows that the isomorphism can be more complicated than what I said.