I don’t see anything wrong with what you’re saying, but if you did that you’d end up not being an indexically selfish person anymore. You’d be selfish in a different, perhaps alien or counterintuitive way. So you might be reluctant to make that kind of commitment until you’ve thought about it for much longer, and in the meantime UDT isn’t compatible with your values. Also, without futuristic self-modification technologies, you probably can’t make such a commitment truly binding even if you wanted to and tried.
Some tangentially related thoughts:
It seems that in many simple worlds (such as the Bomb world), an indexically selfish agent with a utility function u over centered histories would prefer to commit to UDT with a utility function u′ over uncentered histories, where u′ is defined as the sum of all the “uncentered versions” of u (the i-th version being u evaluated as if the pointer points to agent i).
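To spell out that construction a bit (my notation, not anything established above): if a centered history is just an uncentered history h together with a pointer to one of n agents, then

$$u'(h) \;=\; \sum_{i=1}^{n} u(h, i),$$

where u(h, i) denotes u evaluated on h with the pointer fixed to agent i. In a two-agent case this reduces to u′(h) = u(h, 1) + u(h, 2), i.e. the committed agent simply weighs both agents’ outcomes equally.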
Things seem to get more confusing in messy worlds, where an agent’s inability to define a utility function (over uncentered histories) that distinguishes between agent1 and agent2 does not entail that the two agents are about to make the same decision.