I really don’t see what the problem is. Clearly, the being has “read your mind” and knows what you will do. If you are inclined to take both boxes, he knows that from his mind scan, and you are playing right into his hands.
Obviously, your decision cannot affect the outcome because it’s already been decided what’s in the box, but your BRAIN affected what he put in the box.
It’s like me handing you an opaque box and telling you there is $1 million in it if and only if you go and commit murder. Then, you open the box and find it empty. I then offer Hannibal Lecter the same deal, he commits murder, and then opens the box and finds $1 million. Amazing? I don’t think so. I was simply able to create an accurate psychological profile of the two of you.
The question is how to create a formal decision algorithm that will be able to understand the problem and give the right answer (without failing on other such tests). Of course you can solve it correctly if you are not yet poisoned by too much presumptuous philosophy.
I guess I’m missing something obvious. The problem seems very simple, even for an AI.
The way the problem is usually defined (Omega really is omniscient, he’s not playing tricks on you, etc.), there are only two possible outcomes:
You take both boxes, and Omega has already predicted that, meaning that Box B is empty: you win $1,000.
You take Box B only, and Omega has already predicted that, meaning that Box B contains $1,000,000: you win $1,000,000.
That’s it. Period. Nothing else. Nada. Rien. Nichts. Sod all. These are the only two possible options (again, assuming the hypotheses are true). The decision to take Box B only is a simple outcome comparison. It is a perfectly rational decision (if you accept the premises of the game).
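To see how little a decision algorithm actually needs here, a minimal sketch in Python (the payoff table and names are my own illustrative assumptions, not any standard formulation):

```python
# Minimal sketch: outcome comparison under a perfect predictor.
# The payoff table and names are illustrative assumptions.

PAYOFFS = {
    "one-box": 1_000_000,  # Omega predicted one-boxing, so Box B is full
    "two-box": 1_000,      # Omega predicted two-boxing, so Box B is empty
}

def decide() -> str:
    """Pick the action whose predictor-consistent outcome pays the most."""
    return max(PAYOFFS, key=PAYOFFS.get)

print(decide())  # -> one-box
```

Given the premises, the comparison is trivial; all of the difficulty lives in justifying the payoff table, not in the decision rule.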
Now the way Eliezer states it is different from the usual formulation. In Eliezer’s version, you cannot be sure about Omega’s absolute accuracy. All you know is his previous record. That does complicate things, if only because you might be the victim of a scam (e.g. the well-known trick for convincing someone that you can consistently predict the winner of a two-horse race: start with 2^N people, give a different prediction to each half of them, discard those to whom you gave the wrong one, and repeat).
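A rough simulation of that scam (the setup and names are mine, purely illustrative) shows why a perfect track record alone proves very little:

```python
import random

# Rough illustration of the 2^N scam: tell each half of the audience a
# different prediction, then keep only the people you happened to get right.

def run_scam(n_rounds: int) -> int:
    """Return how many marks have seen n_rounds consecutive correct calls."""
    marks = list(range(2 ** n_rounds))      # start with 2^N people
    for _ in range(n_rounds):
        winner = random.choice(["A", "B"])  # the actual race result
        half = len(marks) // 2
        # The first half was told "A", the second half "B".
        marks = marks[:half] if winner == "A" else marks[half:]
    return len(marks)

print(run_scam(10))  # -> 1 (one person out of 1024, every single time)
```

Out of 1,024 marks, exactly one ends up having witnessed ten consecutive correct predictions, with no predictive power involved at all.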
At any rate, the other two outcomes that were impossible in the previous version (involving mis-prediction by Omega) are now possible, with a certain probability that you need to somehow ascertain. That may be difficult, but I don’t see any logical paradox.
For example, if this happened in the real world, you might reason that the probability that you are being scammed is overwhelmingly larger than the probability that a truly omniscient predictor exists. This is a reasonable inference from the fact that we hear about scams every day, but nobody has ever reported such an omniscient predictor. So you would take both boxes and enjoy your expected $1,000 + epsilon (Omega may have been sincere but deluded, lucky in the previous 100 trials, and wrong in this one).
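To make that concrete, here is a back-of-the-envelope expected-value comparison (the accuracy estimate p is an assumption you would have to supply from your own knowledge of the universe):

```python
# Back-of-the-envelope expected values, given your estimate p that Omega's
# prediction matches your actual choice. The sample values of p are assumptions.

def expected_values(p: float) -> tuple[float, float]:
    one_box = p * 1_000_000                # Box B is full with probability p
    two_box = 1_000 + (1 - p) * 1_000_000  # $1,000 plus a mis-predicted Box B
    return one_box, two_box

for p in (0.5, 0.5005, 0.999):
    one_box, two_box = expected_values(p)
    print(f"p={p}: one-box ${one_box:,.0f} vs two-box ${two_box:,.0f}")
```

The break-even point is p = 0.5005: any predictor you believe to be even slightly better than a coin flip already favors one-boxing, so all the real work is in estimating p.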
In the end, the guy who would win most (in expected value!) would not be the “least rational”, but simply the one who made the best estimates of the probabilities of each outcome, based on his own knowledge of the universe (if you have a direct phone line to the Angel Gabriel, you will clearly do better).
What is the part that would be conceptually (as opposed to technically/practically) difficult for an algorithm?