If the gnomes are created only after the coin flip, they are in exactly the same situation as the humans, and considering them teaches us nothing that we could not learn from considering the humans alone.
Instead, let’s now make the gnome in the heads world hate the other human if they don’t have a human of their own. The result is that they will agree to any x < $1, as they are (initially) indifferent to what happens in the heads world: the potential gain, if they are the gnome with a human, is cancelled out by the potential loss, if they are the gnome without a human.
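To spell out the arithmetic (only a sketch, assuming the usual setup of the gnome thought experiment: before the flip each gnome assigns probability 1/4 to “heads and my cell gets a human”, 1/4 to “heads and my cell stays empty”, and 1/2 to tails, and the deal on offer is paying $x for a ticket worth $1 if the coin landed tails), the gnome’s expected utility from advising acceptance is

(1/4)·(−x) + (1/4)·(+x) + (1/2)·(1 − x) = (1 − x)/2,

which is non-negative for every x ≤ $1. The two heads-world terms cancel precisely because this gnome’s utility is not constant conditional on its human not existing, which is what motivates the condition proposed below.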
What this shows is that “Conditional on me existing, the gnome’s utility function coincides with mine” is not a sufficient condition for “I should follow the advice that the gnome would have precommitted to give”.
What I propose instead is: “If, conditional on me existing, the gnome’s utility function coincides with mine, and, conditional on me not existing, the gnome’s utility function is constant, then I should follow the advice that the gnome would have precommitted to give.”
ETA: I’m speaking of indexicality-dependent utility functions here. For indexicality-independent utility functions, such as total or average utilitarianism, the principle simplifies to: “If the gnome’s utility function coincides with mine, then I should follow the advice that the gnome would have precommitted to give.”
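For illustration (again only a sketch, under the same assumed setup as above): a total-utilitarian gnome evaluates advising acceptance as

(1/2)·(−x) + (1/2)·2·(1 − x) = 1 − (3/2)·x,

so it would precommit to advising acceptance for any x < $2/3, while an average-utilitarian gnome gets (1/2)·(−x) + (1/2)·(1 − x) = 1/2 − x and advises acceptance for any x < $1/2. In both cases the gnome’s utility function is the same whether or not its own cell is occupied, so the condition about non-existence is vacuous and the principle reduces to the simpler form.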
I’m still not clear why indexicality-independent utility functions are different from their equivalent indexical versions.
I elaborated on this difference here. However, I don’t think the difference is relevant to my parent comment. With indexical utility functions I simply mean selfishness or “selfishness plus hating the other person if another person exists”, while with indexicality-independent utility functions I mean total and average utilitarianism.