I suppose you could be in a Newcomblike situation with your parents, who made a similar decision about whether to have you.
If I’d thought of ArisKatsaris’s repugnant conclusions, I probably would have used those instead of Azathoth in part 2. I’m sure there are plenty of real-world situations where one’s parents both had justifiably high confidence that you would turn out a certain way, and wouldn’t have birthed you if they thought otherwise. And in a few cases, at least, those expectations would be repugnant ones. The argument also suggests a truly marvelous hack for creating an AI that wants to fulfill its creators’ intentions.
That said, I’m not entirely convinced that changing Prometheus to Azathoth should yield different answers. We can change the Predictor in Newcomb to an evolutionary process. Omega tells you that the process has been trained using copies of every human mind that has ever existed, in chronological order—it doesn’t know it’s a predictor, but it sure acts like one. Or an overtly reference-class-based version: Omega tells you that he’s not a predictor at all; he just picked the past player of this game who most reminds him of you, and put the million dollars in the box if and only if that player one-boxed. Neither of these changes seems like it should alter the answer, as long as the difference in payouts is large enough to swamp fluctuations in the level of logical entanglement.
This isn’t quite the same as Evolution, because you know you exist, which means that your parents one-boxed. This is like the selector using the most similar person who happens to be guaranteed to have chosen to one-box.
Since the predictor places money based on what the most similar person chose, and you know that the most similar person one-boxed, you know that there is $1000000 in box B regardless of what you pick, and you can feel free to take both.
Again, that same logic would seem to lead you to two-box in any variant of transparent Newcomb.
I haven’t studied all the details of UDT, so I may have missed an argument for treating it as the default. (I don’t know if that affects the argument or not, since UDT seems a little more complicated than ‘always one-box’.) So far all the cases I’ve seen look like they give us reasons to switch from within ordinary utility-maximizing decision theory—for a particular case or set of cases.
Now if we find ourselves in transparent Newcomb without having made a decision, it seems too late to switch in that way. If we consider the problem beforehand, ordinary decision theory gives us reason to go with UDT iff Omega can actually predict our actions. Evolution can’t. It seems not only possible but common for humans to make choices that don’t maximize reproduction. That seems to settle the matter. Even within UDT I get the feeling that the increased utility from doing as you think best can overcome a slight theoretical decrease in chance of existing.
If evolution could predict the future as well as Omega, then logically I’d have an overwhelming chance of “one-boxing”. The actual version of me would call this morally wrong, so UDT might still have a problem there. But creating an issue takes more than just considering parents who proverbially can’t predict jack.
there is no logical update on what it does after you know your own decision.
Consider Newcomb’s Dilemma with an imperfect predictor Psi. Psi will agree with Omega’s predictions 95% of the time.
P($1000000 in B | you choose to one-box) = .95
P($0 in B | you choose to two-box) = .95
Utility of one-boxing: .95 × $1000000 + .05 × $0 = $950,000
Utility of two-boxing: .95 × $1000 + .05 × ($1000 + $1000000) = $51,000 (the two-boxer keeps box A’s $1000 even in the 5% of cases where the prediction is wrong and B is full as well)
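Here is a minimal sketch of that arithmetic (my own illustration, not part of the original comment), assuming the standard Newcomb payoffs of $1000 in box A and $1,000,000 in box B, and that a two-boxer always collects box A on top of whatever is in B:

```python
# Expected utility against a predictor that is right 95% of the time.
ACCURACY = 0.95
BOX_A, BOX_B = 1_000, 1_000_000

# One-boxing: with probability .95 the predictor foresaw it and filled B.
eu_one_box = ACCURACY * BOX_B + (1 - ACCURACY) * 0

# Two-boxing: you always get A; with probability .05 the predictor was
# wrong and B is full anyway.
eu_two_box = ACCURACY * BOX_A + (1 - ACCURACY) * (BOX_A + BOX_B)

print(eu_one_box)  # 950000.0
print(eu_two_box)  # 51000.0
```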
Now, let’s say that Psi just uses Omega’s prediction for the person most similar to you (let’s call them S), but there’s only a 95% chance that you agree with that person.
P($1000000 in B | S chooses to one-box) = 1
P($0 in B | S chooses to two-box) = 1
and
P(S chooses to one-box | you choose to one-box) = .95
P(S chooses to two-box | you choose to two-box) = .95
You’ll find that this gives the same probabilities as the 95%-accurate Psi above, since
P($1000000 in B | you choose to one-box) = P($1000000 in B | S chooses to one-box) × P(S chooses to one-box | you choose to one-box) = 1 × .95 = .95 (the case where S two-boxes contributes nothing, since box B is then empty).
Since the probabilities are the same, the expected utilities are the same.
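A quick Monte Carlo check of that equivalence (my own sketch, not from the original; the constant names are made up for illustration): simulate a predictor that simply copies S’s choice, where S agrees with you 95% of the time, and confirm that box B ends up full 95% of the time conditional on your one-boxing.

```python
import random

# A predictor that copies S's choice, where S agrees with you 95% of the
# time, behaves just like a predictor that is directly right about you
# 95% of the time.
random.seed(0)
AGREEMENT = 0.95
TRIALS = 100_000

b_full_count = 0
one_box_count = 0
for _ in range(TRIALS):
    you_one_box = random.random() < 0.5   # your choice; any mix works
    s_one_boxes = you_one_box if random.random() < AGREEMENT else not you_one_box
    b_full = s_one_boxes                  # B is filled iff S one-boxed
    if you_one_box:
        one_box_count += 1
        b_full_count += b_full

print(b_full_count / one_box_count)       # roughly 0.95
```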
Now, let’s use evolution as our predictor. Evolution is unable to model you, but it does know what your parents did.
However, you are not your parents. I will be liberal, though, and assume that you have a 95% chance of choosing the same thing they did.
So,
P(you one-box | your parents one-boxed) = .95
P(you two-box | your parents two-boxed) = .95
Since Evolution predicts that you’ll do the same thing as your parents,
P($1000000 in B | your parents one-boxed) = 1
P($0 in B | your parents two-boxed) = 1
This may seem similar to the previous predictor, but there’s a catch—you exist. Since you exist, and you only exist because your parents one-boxed,
P(your parents one-boxed | you exist) = 1
so P(your parents one-boxed) = 1 and P(your parents two-boxed) = 0.
Note how the fact of your existence implies that your parents one-boxed. Though you are more likely to choose what your parents chose, you still have the option not to.
Calculate the probabilities:
P($1000000 in B) = P($1000000 in B | your parents one-boxed) × P(your parents one-boxed) + P($1000000 in B | your parents two-boxed) × P(your parents two-boxed) = 1 × 1 + 0 × 0 = 1
and P($0 in B) = 0
Since you exist, you know that your parents one-boxed. Since they one-boxed, you know that Evolution thinks you will one-box. Since Evolution thinks you’ll one-box, there will be $1000000 in box B. In this model most people will in fact one-box, simply because of the 95% chance that they agree with their parents, but the 5% who two-box get away with an extra $1000.
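A sketch of that conclusion as a simulation (again my own illustration, under the comment’s assumptions: conditioning on your existence means your parents one-boxed, and you copy their choice 95% of the time):

```python
import random

# Evolution fills box B based on your parents' choice, and you only exist
# if your parents one-boxed -- so conditional on your existence B always
# holds $1,000,000, and two-boxing just adds box A's $1000.
random.seed(0)
COPY_PARENTS = 0.95
TRIALS = 100_000

one_box_payoffs, two_box_payoffs = [], []
for _ in range(TRIALS):
    b_amount = 1_000_000                         # parents one-boxed, so B is full
    you_one_box = random.random() < COPY_PARENTS
    if you_one_box:
        one_box_payoffs.append(b_amount)
    else:
        two_box_payoffs.append(b_amount + 1_000)

print(sum(one_box_payoffs) / len(one_box_payoffs))   # 1000000.0
print(sum(two_box_payoffs) / len(two_box_payoffs))   # 1001000.0
```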
So basically, once I exist I know I exist, and Evolution can’t take that away from me.
Also, please feel free to point out errors in my math; it’s late over here and I probably made some.