Ah! So I need to assign priors to three hypotheses: (1) Omega is a magician (i.e. an illusion artist); (2) Omega has bribed people to lie about his past successes; (3) he is what he claims.
So I assign a prior of zero probability to hypothesis #3, and cheerfully one-box using everyday decision theory.
First: http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/
You don’t seem to be entering into the spirit of the problem. After reading the premise, you are “supposed” to reach the conclusion that there’s a good chance Omega can predict your actions in this domain pretty well, from what he knows about you.
If you think that’s not a practical possibility, then I recommend that you imagine yourself as a deterministic robot—where such a scenario becomes more believable—and then try the problem again.
If I imagine myself as a deterministic robot who knows that he is a deterministic robot, I am no longer able to maintain the illusion that I care about this problem.
Do you think you aren’t a deterministic robot? Or that you are, but you don’t know it?
It is a quantum universe. So I would say that I am a stochastic robot. And Omega cannot predict my future actions.
...then you need to imagine that you made the robot, that it is meeting Omega on your behalf, and that it then gives you all its winnings.
I like this version! Now the answer seems quite obvious.
In this case, I would design the robot to be a one-boxer. And I would harbour the secret hope that a stray cosmic ray will cause the robot to pick both boxes anyway.
Yes, but you would still give its skull a lead lining, and make use of redundancy to produce reliability...
Agreed.
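
To make the robot-designer’s reasoning concrete, here is a minimal sketch (not part of the original exchange) comparing the expected payoff of building the robot as a one-boxer versus a two-boxer. The $1,000,000 / $1,000 payoffs and the predictor-accuracy parameter p are assumptions of the sketch, not figures given in the thread. Whenever p is even slightly better than 0.5, committing to one-boxing wins in expectation; yet once the prediction is fixed, taking both boxes always adds $1,000, which is exactly the “stray cosmic ray” hope above.

```python
# Sketch (illustrative assumptions): Newcomb payoffs for a robot whose
# committed policy Omega predicts correctly with probability p.

BIG = 1_000_000   # opaque box, filled only if Omega predicted one-boxing
SMALL = 1_000     # transparent box, always present

def expected_payoff(policy: str, p: float) -> float:
    """Expected winnings for a robot committed to `policy`."""
    if policy == "one-box":
        # With probability p Omega foresaw one-boxing and filled the opaque box.
        return p * BIG
    if policy == "two-box":
        # With probability p Omega foresaw two-boxing and left the opaque box empty.
        return p * SMALL + (1 - p) * (BIG + SMALL)
    raise ValueError(f"unknown policy: {policy}")

for p in (0.5, 0.55, 0.9, 0.99):
    print(f"p={p:4}:  one-box={expected_payoff('one-box', p):>12,.0f}"
          f"   two-box={expected_payoff('two-box', p):>12,.0f}")

# Once Omega's prediction is locked in, grabbing both boxes adds SMALL no
# matter what -- hence the designer's hope for a cosmic-ray flip, and the
# reply about lead-lining the skull to prevent exactly that.
```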