Suppose my decision algorithm for the “both boxes are transparent” case is to take only box B if and only if it is empty, and to take both boxes if and only if box B has a million dollars in it. How does Omega respond? No matter how it handles box B, its implied prediction will be wrong.
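This can be sketched as a quick consistency check (a hypothetical illustration of the argument; the names are my own, not from the problem statement):

```python
# Transparent-boxes Newcomb: Omega's filling of box B encodes its prediction.
# A full box B implies Omega predicted "one-box" (take only B);
# an empty box B implies Omega predicted "two-box" (take both).

def contrarian(b_has_million: bool) -> str:
    """Take only box B iff it is empty; take both iff it holds the million."""
    return "two-box" if b_has_million else "one-box"

# Check both ways Omega could fill the box against the prediction it implies.
for b_has_million, implied_prediction in [(True, "one-box"), (False, "two-box")]:
    choice = contrarian(b_has_million)
    print(b_has_million, choice, choice == implied_prediction)
# The printed flag is False in both cases: neither filling is consistent,
# so there is no prediction Omega can make that comes out true.
```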
Box B appears full of money; however, after you take both boxes, you find that the money in Box B is Monopoly money. The money in Box A, though, remains genuine.
Box B appears empty; however, on opening it you find, written on the bottom of the box, the full details of a bank account opened by Omega containing one million dollars, together with written permission for you to access said account.
In short, even with transparent boxes, there are a number of ways for Omega to lie to you about the contents of Box B, and in this manner control your choice. If Omega is constrained not to lie about the contents of Box B, then it gets a bit trickier; Omega can still maintain an over-90% success rate by presenting the same choice to plenty of other people with an empty box B (since most people will likely take both boxes if they know B is empty).
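As a rough illustration of the bookkeeping (the specific numbers here are my own assumptions, not from the problem): suppose Omega shows an empty box B to 99 ordinary subjects, of whom 95% take both boxes as predicted, plus one determined contrarian it is guaranteed to get wrong.

```python
ordinary = 99        # subjects shown an empty box B (assumed figure)
p_two_box = 0.95     # fraction who take both boxes, as predicted (assumed)
contrarians = 1      # the one subject guaranteed to falsify the prediction

correct = ordinary * p_two_box       # expected correct predictions
total = ordinary + contrarians       # total trials
accuracy = correct / total
print(round(accuracy, 4))  # 0.9405, comfortably above 90%
```

Getting one subject deliberately wrong barely dents the overall record, which is all the stated accuracy requires.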
Or, alternatively, Omega can decide to offer you the choice at a time when Omega predicts you won’t live long enough to make it.
Perhaps just as slippery, what if my algorithm is to take only box B if and only if it contains a million dollars, and to take both boxes if and only if box B is empty? In this case, either prediction Omega makes will be accurate, so what prediction does it make?
That depends; instead of making a prediction here, Omega is controlling your choice. Whether you get the million dollars in this case depends on whether Omega wants you to have it, in furtherance of whatever other plans Omega may have.
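Enumerating the two fillings for this policy shows the opposite situation from the contrary algorithm: both are self-consistent fixed points, so nothing pins down which one Omega realises (again a hypothetical sketch, with names of my own invention):

```python
def compliant(b_has_million: bool) -> str:
    """Take only box B iff it holds the million; take both iff it is empty."""
    return "one-box" if b_has_million else "two-box"

# A full box B implies the prediction "one-box"; an empty one implies "two-box".
for b_has_million, implied_prediction in [(True, "one-box"), (False, "two-box")]:
    assert compliant(b_has_million) == implied_prediction
# Both fillings satisfy their implied prediction, so whether you walk away
# with the million is decided by which fixed point Omega chooses to realise.
```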
Omega doesn’t need to predict your choice; in the transparent-box case, Omega needs to predict your decision algorithm.
“The boxes are transparent” doesn’t literally mean “light waves pass through the boxes” given the description of the problem; it means “you can determine what’s inside the boxes without (and before) opening them”.
Responding by saying “maybe you can see into the boxes but you can’t tell if the money inside is fake” is being hyper-literal and ignoring what people really mean when they specify “suppose the boxes are transparent”.

Fair enough. I am at times overly literal.
In which case, if you are determined to show that Omega’s prediction is incorrect, and Omega can predict that determination, then the only way that Omega can avoid making an incorrect prediction is either to modify you in some manner (until you are no longer determined to make Omega’s prediction incorrect), or to deny you the chance to make the choice entirely.
For example, Omega might modify you by changing your circumstances: say, by giving a deadly disease to someone close to you, which can be cured, but only at a total cost of all the money you are able to raise plus $1000. If Omega then offers the choice (with box B empty), most people would take both boxes, in order to be able to afford the cure.
Alternatively, given such a contrary precommitment, Omega may simply never offer you the choice at all, or might offer it three seconds before you get struck by lightning.
“Omega puts money inside the boxes, you just never live to get it” is as outside the original problem as “the boxes are transparent, you just don’t understand what you’re seeing when you look in them” is outside the transparent problem. Just because the premise of the problem doesn’t explicitly say “… and you get the contents of the boxes” doesn’t mean the paradox can be resolved by saying you don’t get the contents of the boxes—that’s being hyper-literal again. Likewise, just because the problem doesn’t say “… and Omega can’t modify you to change your choice” doesn’t mean that the paradox can be resolved by saying that Omega can modify you to change your choice—the problem is about decision theory, and Omega doesn’t have capabilities that are irrelevant to what the problem is about.
As far as I can tell, the problem as stated gives Omega three options:
1. Fail to correctly predict what the person will choose
2. Refuse to participate
3. Cheat
It is likely that Omega will try to correctly predict what the person will choose; that is, Omega will strive to avoid the first option. If Omega offers the choice to this hypothetical person in the first place, then Omega is not taking the second option.
That leaves the third option: to cheat. I expect that this is the choice that Omega will be most likely to take; one of the easiest ways to do this is by ignoring the spirit of the constraints and taking the exact literal meaning. (Another way is to creatively misunderstand the spirit of the rules as given.)
So I offered some suggestions as to how Omega might cheat, such as arranging that the decision is never made.
If you think that’s outside the problem, then I’m curious: what do you think Omega would do?
The point here is that the question is inconsistent. It is impossible for an Omega that can predict with high accuracy to exist; as you’ve correctly pointed out, it leads to a situation where Omega must either fail to predict correctly, refuse to participate, or cheat, all of which are out of bounds of the problem.
I don’t think it’s ever wise to ignore the possibility of a superintelligent AI cheating in some manner.
If we ignore that possibility, then yes, the question would be inconsistent, which implies that if the situation were actually to appear to happen, it would be quite likely that either:
1. The situation has been misunderstood; or
2. Someone is cheating
Since it is far easier for Omega, being an insane superintelligence, to cheat than it is for someone to cheat Omega, it seems likeliest that if anyone is cheating, then it is Omega.
After all, Omega had, and did not take, the option to refuse to participate.
I expect that this is the choice that Omega will be most likely to take; one of the easiest ways to do this is by ignoring the spirit of the constraints and taking the exact literal meaning.
The constraints aren’t constraints on Omega; the constraints are constraints on the reader—they tell the reader what he is supposed to use as the premises of the scenario. Omega cannot cheat unless the reader interprets the description of the problem to mean that Omega is willing to cheat. And if the reader does interpret it that way, it’s the reader, not Omega, who’s violating the spirit of the constraints and being hyper-literal.
what do you think Omega would do?
I think that, depending on the human’s intentions, and assuming the human is a perfect reasoner, the conditions of the problem are contradictory. Omega can’t always predict the human—it’s logically impossible.