I guess I’m missing something obvious. The problem seems very simple, even for an AI.
The way the problem is usually defined (Omega really is omniscient, he is not playing tricks on you, etc.), there are only two possible outcomes:
You take the two boxes, and Omega had already predicted that, meaning that there is nothing in Box B—you win $1,000.
You take Box B only, and Omega had already predicted that, meaning that there is $1,000,000 in Box B—you win $1,000,000.
That’s it. Period. Nothing else. Nada. Rien. Nichts. Sod all. These are the only two possible options (again, assuming the hypotheses are true). The decision to take box B only is a simple outcome comparison. It is a perfectly rational decision (if you accept the premises of the game).
Now the way Eliezer states it is different from the usual formulation. In Eliezer’s version, you cannot be sure of Omega’s absolute accuracy. All you know is his previous record. That does complicate things, if only because you might be the victim of a scam (e.g. the well-known trick for convincing someone that you can consistently predict the winning horse in a 2-horse race: start with 2^N people, give a different prediction to each half of them, discard those to whom you gave the wrong one, and repeat).
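The scam works by pure attrition, not prediction. A minimal sketch (the function name and round count are my own, chosen for illustration):

```python
import random

def run_scam(n_rounds: int) -> int:
    """Simulate the 2-horse-race prediction scam over n_rounds races."""
    # Start with 2^N marks. Each round, tell half of them "horse A will win"
    # and the other half "horse B will win".
    marks = 2 ** n_rounds
    for _ in range(n_rounds):
        random.random() < 0.5  # a fair race is run; its outcome doesn't matter,
        # because whichever horse wins, exactly half the remaining marks were
        # told the winning horse. Keep only that half.
        marks //= 2
    return marks  # the survivors, each of whom saw only correct predictions

# After N races, exactly one person is left who has witnessed a perfect
# N-for-N prediction record, with no predictive power involved at all.
print(run_scam(10))
```

From that one survivor's point of view, the evidence for an infallible predictor looks exactly like Omega's 100-for-100 record.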
At any rate, the other two outcomes that were impossible in the previous version (involving mis-prediction by Omega) are now possible, with a certain probability that you need to somehow ascertain. That may be difficult, but I don’t see any logical paradox.
For example, if this happened in the real world, you might reason that the probability that you are being scammed overwhelms the probability that a truly omniscient predictor exists. This is a reasonable inference from the fact that we hear about scams every day, but nobody has ever reported such an omniscient predictor. So you would take both boxes and enjoy your expected $1000+epsilon (Omega may have been sincere but deluded, lucky in the previous 100 trials, and wrong in this one).
In the end, the person who would win the most (in expected value!) would not be the “least rational”, but simply the one who made the best estimates of the probabilities of each outcome, based on his own knowledge of the universe (if you have a direct phone line to the Angel Gabriel, you will clearly do better).
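The outcome comparison is just arithmetic once you have an estimate. A sketch, where p is your estimated probability that Omega predicted your choice correctly (the function names are mine):

```python
MILLION = 1_000_000
THOUSAND = 1_000

def ev_one_box(p: float) -> float:
    # Box B holds $1M only if Omega correctly predicted one-boxing.
    return p * MILLION

def ev_two_box(p: float) -> float:
    # You always get the visible $1,000; box B holds $1M only if
    # Omega mispredicted (he expected one-boxing but you took both).
    return THOUSAND + (1 - p) * MILLION

# Break-even: p * 1e6 = 1000 + (1 - p) * 1e6  =>  p = 0.5005.
# Above that accuracy, one-boxing has the higher expected value.
print(ev_one_box(0.99), ev_two_box(0.99))
```

So the decision reduces to whether your evidence (Omega's record, base rates of scams, etc.) pushes p past roughly 0.5005, not to any logical paradox.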
What is the part that would be conceptually (as opposed to technically/practically) difficult for an algorithm?