I don’t think that works, because condition 1) isn’t actually satisfied. The selfish human in cell B is indifferent over worlds where that same human doesn’t exist, but the gnome is not indifferent.
Consequently, I think that as one of the humans in your “closest human” case, you shouldn’t follow the gnome’s advice, because the gnome’s recommendation is influenced by a priori possible worlds that you don’t care about at all. This is the same reason a human with utility function T shouldn’t follow the recommendation of 4⁄5 from a gnome with utility function IT: even though these recommendations are correct for the gnomes, they aren’t correct for the humans.
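To make that concrete, here is a minimal sketch of the kind of calculation I have in mind. The setup is a stand-in rather than the exact one we’ve been discussing: a fair coin, with heads creating one human in a uniformly random cell and tails creating a human in each of the two cells; decisions are linked, so every human who exists buys the tails-ticket at the same price x; and the ticket pays 1 if the coin landed tails. Under those assumptions, a gnome whose utility is “total utility, but only in worlds where a human appears in my cell” (an IT-style function) breaks even at 4⁄5, whereas plain total utility T breaks even at 2⁄3.

```python
from fractions import Fraction as F

# A priori possible worlds, from the perspective of the gnome in one particular cell.
# Stand-in assumptions: heads puts the single human in a uniformly random cell,
# tails puts a human in both cells; everyone who exists buys at the same price x.
WORLDS = [
    # (probability, human appears in my cell?, number of humans, coin outcome)
    (F(1, 4), True,  1, "heads"),
    (F(1, 4), False, 1, "heads"),
    (F(1, 2), True,  2, "tails"),
]

def breakeven(utility):
    """Highest ticket price x at which the gnome's expected utility is still non-negative."""
    def eu(x):
        return sum(p * utility(here, n, (1 - x) if coin == "tails" else -x)
                   for p, here, n, coin in WORLDS)
    # Expected utility is linear and decreasing in x, so solve eu(x) = 0 directly.
    e0, e1 = eu(F(0)), eu(F(1))
    return e0 / (e0 - e1)

def indicator_total(here, n, payoff):
    # IT-style utility: total payoff, but only in worlds where a human
    # actually appears in this gnome's cell.
    return n * payoff if here else F(0)

def total(here, n, payoff):
    # Plain total utility T: counts every world, whether or not a human
    # appears in this particular cell.
    return n * payoff

print(breakeven(indicator_total))  # 4/5
print(breakeven(total))            # 2/3
```

The exact numbers depend on the stand-in assumptions, but the structural point survives: two agents evaluating the same decision over the same set of a priori worlds will disagree whenever their utility functions treat some of those worlds differently.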
As for the “same reasons” comment, I think that doesn’t hold up either. The decisions in all of the cases are linked decisions, even in the simple case of U = S above. The difference in the S case is simply that the linked nature of the decision turns out to be irrelevant, because the other gnome’s decision has no effect on the first gnome’s utility. I would argue that the gnomes in all of the cases we’ve put forth have always had the “same reasons” in the sense that they’ve always been using the same decision algorithm, albeit with different utility functions.
Let’s ditch the gnomes; they are contributing little to this argument.
My average utilitarian = selfish argument was based on the fact that if you switched everyone who existed from one utility system to the other, their utilities would be the same, given that they existed.

The argument here is that switching everyone’s utility from one system to the other would also affect their counterfactual utility in the worlds where they don’t exist.
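Concretely, here is a toy tabulation under the same stand-in assumptions as the sketch above (x = 1⁄2 is arbitrary, and I’m treating “the human in cell B” as the same person across worlds, which is itself part of what’s at issue). Wherever the cell-B human exists, the average-utilitarian value and the selfish value coincide, because linked decisions give everyone the same payoff; in the world where that human doesn’t exist, average utilitarianism still assigns a value, while the selfish human is simply indifferent (the string below is a placeholder for that indifference, not a utility value).

```python
from fractions import Fraction as F

x = F(1, 2)  # arbitrary ticket price, just to get concrete numbers

# (world, does the cell-B human exist?, payoffs of everyone who exists)
worlds = [
    ("heads, human in A", False, [-x]),
    ("heads, human in B", True,  [-x]),
    ("tails",             True,  [1 - x, 1 - x]),
]

for label, b_exists, payoffs in worlds:
    average = sum(payoffs) / len(payoffs)                  # average-utilitarian value of the world
    selfish = payoffs[-1] if b_exists else "indifferent"   # cell-B human's own payoff, if any
    print(f"{label:>18}: average = {average}, selfish(B) = {selfish}")
```

Switching everyone who exists from one utility function to the other therefore leaves their realized utilities unchanged, but it changes what the first row is worth to them, which is exactly where the two systems come apart.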
That seems… interesting. I’ll reflect further.
Yep, I think that’s a good summary. UDT-like reasoning depends on the utility values of counterfactual worlds, not just real ones.
I’m starting to think this is another version of the problem of personal identity… But I want to be thorough before posting anything more.
I think I’m starting to see the argument...