One cannot add agents to an anthropic situation and expect the situation to be necessarily unchanged.
The point of that is to allow me to analyse the problem without assuming the gnome example must be true. The real objections are in the subsequent points. Even if the gnomes argue something (and it's still a big question what they argue), we still don't have evidence that the humans should follow them.
To be blunt, this is a question you can solve. Since it's a non-anthropic problem, vanilla UDT is all that's needed, though there is some danger in Beluga's analysis.
The evidence goes as follows: The gnomes are in the same situation as the humans, with the same options and the same payoffs. Although they started with different information than the humans (especially since the humans didn’t exist), at the time when they have to make the decision they have the same probabilities for payoffs given actions (although there’s a deeper point here that could bear elaboration). Therefore the right decision for the gnome is also the right decision for the human.
This sounds an awful lot like an isomorphism argument to me… What sort of standard of evidence would you say is appropriate for an isomorphism argument?
I’m convinced that this issue goes much deeper than it first seemed… I’m putting stuff together, and I’ll publish a post on it soon.
The deeper point is important, and I think you’re mistaken about the necessary and sufficient conditions for an isomorphism here.
If a human appears in a gnome’s cell, then that excludes the counterfactual world in which the human did not appear in the gnome’s cell. However, on UDT, the gnome’s decision does depend on the payoffs in that counterfactual world.
Thus, for the isomorphism argument to hold, the preferences of the human and gnome must align over counterfactual worlds as well as factual ones. It is not sufficient to have the same probabilities for payoffs given linked actions when you have to make a decision; you must also have the same probabilities for payoffs given linked actions when you don't have to make a decision.
Could you give a worked example of the correct action for the gnome with a human in their cell depending on the payoffs for the gnome without a human in their cell? (Assuming they know whether there’s a human in their cell, and know the three different possible sets of payoffs for the available actions—if these constraints were relaxed I think it would be clearly doable. As it is I’m doubtful.)
I already have a more detailed version here; see the different calculations for E[T] vs E[IT]. However, I'll give you a short version. From the gnome's perspective, the two different types of total utilitarian utility functions are:
T = total $ over both cells
IT = total $ over both cells if there’s a human in my cell, 0 otherwise.
and the possible outcomes are
p=1/4 for heads + no human in my cell
p=1/4 for heads + human in my cell
p=1/2 for tails + human in my cell.
As you can see, these two utility functions only differ when there is no human in the gnome's cell. Moreover, by the assumptions of the problem, the utility functions of the gnomes are symmetric, and so are their decisions. UDT proper doesn't apply to gnomes whose utility function is IT, because the function IT is different for each gnome, but the more general principle of linked decisions still applies, thanks to the obvious symmetry between the gnomes' situations despite the differences in their utility functions. Thus we assume a linked decision in which each gnome recommends buying a ticket for $x.
The utility calculations are therefore
E[T] = (1/4)(-x) + (1/4)(-x) + (1/2)(2)(1-x) = 1 - (3/2)x   (breakeven at x = 2/3)
E[IT] = (1/4)(0) + (1/4)(-x) + (1/2)(2)(1-x) = 1 - (5/4)x   (breakeven at x = 4/5)
Thus gnomes who are indifferent when no human is present (U = IT) should precommit to a value of x = 4/5, while gnomes who still care about the total $ when no human is present (U = T) should precommit to a value of x = 2/3.
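As a sanity check, here is a minimal sketch (my own, not part of the original comment) that recomputes both expectations with exact fractions, using the outcome probabilities and payoffs above; the function names are only illustrative.

```python
# Minimal sketch (not from the original comment): recheck E[T] and E[IT]
# using the outcome probabilities and payoffs given above.
from fractions import Fraction as F

def expected_T(x):
    """E[total $ over both cells] under the linked policy 'buy at $x'."""
    # heads, no human in my cell: the human in the other cell buys and loses (-x)
    # heads, human in my cell:    my human buys and loses (-x)
    # tails, human in my cell:    both humans buy and win, total 2*(1 - x)
    return F(1, 4) * (-x) + F(1, 4) * (-x) + F(1, 2) * 2 * (1 - x)

def expected_IT(x):
    """Same, except the 'no human in my cell' outcome counts as 0."""
    return F(1, 4) * 0 + F(1, 4) * (-x) + F(1, 2) * 2 * (1 - x)

# Breakeven prices match the algebra: 1 - (3/2)x = 0 and 1 - (5/4)x = 0.
assert expected_T(F(2, 3)) == 0
assert expected_IT(F(4, 5)) == 0
assert expected_T(F(4, 5)) < 0   # a T-gnome should not pay 4/5
assert expected_IT(F(2, 3)) > 0  # an IT-gnome is still happy to pay 2/3
```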
Note also that this is invariant under the choice of which constant value we use to represent indifference. For some constant C, the correct calculation would actually be
E[IT | buy at $x] = (1/4)(C) + (1/4)(-x) + (1/2)(2)(1-x) = (1/4)C + 1 - (5/4)x
E[IT | don't buy] = (1/4)(C) + (1/4)(0) + (1/2)(0) = (1/4)C
and so the breakeven point remains at x = 4/5.
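Here is a similar sketch (again mine, under the same assumptions) checking that the breakeven price does not depend on C: the C term enters both branches with weight 1/4 and cancels out of the comparison.

```python
# Minimal sketch: the indifference constant C cancels, so the breakeven
# price for an IT-gnome is x = 4/5 for every choice of C.
from fractions import Fraction as F

def e_IT_buy(x, C):
    """E[IT | buy at $x], with value C assigned to the 'no human in my cell' outcome."""
    return F(1, 4) * C + F(1, 4) * (-x) + F(1, 2) * 2 * (1 - x)

def e_IT_dont_buy(C):
    """E[IT | don't buy]: only the (1/4)*C term remains."""
    return F(1, 4) * C

for C in (F(0), F(10), F(-7), F(123, 4)):
    assert e_IT_buy(F(4, 5), C) == e_IT_dont_buy(C)   # exactly breakeven at 4/5
    assert e_IT_buy(F(1, 2), C) > e_IT_dont_buy(C)    # cheaper ticket: buy
    assert e_IT_buy(F(9, 10), C) < e_IT_dont_buy(C)   # dearer ticket: don't buy
```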
Thanks for giving this great example. This works because in the total utilitarian case (and average utilitarian, and other more general possibilities) the payoff of one gnome depends on the action of the other, so they have to coordinate for maximum payoff. This effect doesn’t exist in any selfish case, which is what I was thinking about at the time. But this definitely shows that isomorphism can be more complicated than what I said.