The “me and my copies” that this agent bases its utility on are split across possible worlds with different outcomes. EDT requires a function that maps an action and an outcome to a utility value, and no such function exists for this agent.
Edit: as an example, what is the utility of this agent winning $1000 in a game where they don’t know the chance of winning? They don’t even know what their own utility is, because their utility doesn’t depend only upon the outcome. If you credibly tell them afterward that they were nearly certain to win, they value the same $1000 far more highly than if you tell them that there was a 1 in a million chance that they would win.
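One crude way to make this concrete (a toy sketch of my own, not anything built into the setup): suppose the agent weights a win by the probability that its possible-world copies also win. The function name and weighting rule below are purely illustrative.

```python
# Hypothetical valuation rule: the agent weights the $1000 by the probability
# mass of possible-world copies who share the win (here, simply p_win).
def copy_valuing_agent_value(outcome_payout: float, p_win: float) -> float:
    return outcome_payout * p_win

# The same outcome -- "I won $1000" -- gets different values depending on the
# chance of winning, which no function of (action, outcome) alone can express:
print(copy_valuing_agent_value(1000, 0.999))  # ~999: "I was nearly certain to win"
print(copy_valuing_agent_value(1000, 1e-6))   # ~0.001: "1 in a million chance"

# EDT's expected utility, EU(a) = sum over o of P(o|a) * u(a, o), assumes
# u(a, o) is fixed once a and o are fixed. For this agent the valuation above
# already depends on P(o|a), so the probability-times-utility decomposition breaks.
```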
For this sort of agent that values nonexistent and causally disconnected people, we need some different class of decision theory altogether, and I’m not sure it can even be made rationally consistent.