That said, I’m not entirely convinced that changing Prometheus to Azathoth should yield different answers. We can change the Predictor in Newcomb’s problem to an evolutionary process. Omega tells you that the process has been trained on copies of every human mind that has ever existed, in chronological order—it doesn’t know it’s a predictor, but it sure acts like one. Or an overtly reference-class-based version: Omega tells you that he’s not a predictor at all; he just picked the past player of this game who most reminds him of you, and put the million dollars in the box if and only if that player one-boxed. Neither of these changes seems like it should alter the answer, as long as the difference in payouts is large enough to swamp fluctuations in the level of logical entanglement.
This isn’t quite the same as Evolution, because you know you exist, which means that your parents one-boxed. This is like the selector picking the most similar person from a pool guaranteed to have one-boxed.
Since the predictor places money based on what the most similar person chose, and you know that the most similar person one-boxed, you know that there is $1,000,000 in box B regardless of what you pick, and you can feel free to take both.
I haven’t studied all the details of UDT (updateless decision theory), so I may have missed an argument for treating it as the default. (I don’t know if that affects the argument or not, since UDT seems a little more complicated than ‘always one-box’.) So far, all the cases I’ve seen look like they give us reasons to switch from within ordinary utility-maximizing decision theory—for a particular case or set of cases.
Now if we find ourselves in transparent Newcomb without having made a decision, it seems too late to switch in that way. If we consider the problem beforehand, ordinary decision theory gives us reason to go with UDT iff Omega can actually predict our actions. Evolution can’t. It seems not only possible but common for humans to make choices that don’t maximize reproduction. That seems to settle the matter. Even within UDT I get the feeling that the increased utility from doing as you think best can overcome a slight theoretical decrease in chance of existing.
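The claim that ordinary decision theory only endorses one-boxing when the predictor is actually accurate can be checked with simple arithmetic. Below is a minimal sketch, using the standard Newcomb payouts ($1,000,000 in box B, $1,000 in box A); the function name and the accuracy values are my own illustrative choices, not anything from the discussion above:

```python
# Expected payoff in Newcomb's problem, given a predictor with accuracy p.
# The predictor fills box B ($1,000,000) iff it predicts one-boxing;
# box A always contains $1,000 and is taken only by the two-boxer.
def expected_value(one_box: bool, p: float) -> float:
    big, small = 1_000_000, 1_000
    if one_box:
        return p * big            # box B is filled iff the prediction was right
    return (1 - p) * big + small  # box B is filled iff the prediction was wrong

# One-boxing wins only once accuracy passes the break-even point
# (1,001,000 / 2,000,000 = 0.5005); a near-chance "predictor" like
# evolution, with p close to 0.5, favors two-boxing instead.
for p in (0.5, 0.5005, 0.99):
    print(p, expected_value(True, p) - expected_value(False, p))
```

With these payouts, the break-even accuracy sits barely above chance, which is why the argument turns on whether evolution can predict choices at all rather than on how well.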
If evolution could predict the future as well as Omega then logically I’d have an overwhelming chance of “one-boxing”. The actual version of me would call this morally wrong, so UDT might still have a problem there. But creating an issue takes more than just considering parents who proverbially can’t predict jack.
Again, that same logic would seem to lead you to two-box in any variant of transparent Newcomb.