“Omega puts money inside the boxes, you just never live to get it” is as outside the original problem as “the boxes are transparent, you just don’t understand what you’re seeing when you look in them” is outside the transparent problem. Just because the premise of the problem doesn’t explicitly say “… and you get the contents of the boxes” doesn’t mean the paradox can be resolved by saying you don’t get the contents of the boxes—that’s being hyper-literal again. Likewise, just because the problem doesn’t say “… and Omega can’t modify you to change your choice” doesn’t mean that the paradox can be resolved by saying that Omega can modify you to change your choice—the problem is about decision theory, and Omega doesn’t have capabilities that are irrelevant to what the problem is about.
As far as I can tell, the problem as stated gives Omega three options:
Fail to correctly predict what the person will choose
Refuse to participate
Cheat
It is likely that Omega will try to correctly predict what the person will choose; that is, Omega will strive to avoid the first option. And if Omega offers the choice to this hypothetical person in the first place, then Omega is not taking the second option.
That leaves the third option: to cheat. I expect that this is the choice that Omega will be most likely to take; one of the easiest ways to do this is by ignoring the spirit of the constraints and taking the exact literal meaning. (Another way is to creatively misunderstand the spirit of the rules as given.)
So I provided some suggestions as to how Omega might cheat, such as arranging that the decision is never made.
If you think that’s outside the problem, then I’m curious: what do you think Omega would do?
The point here is that the question is inconsistent. It is impossible for an Omega that can predict with high accuracy to exist; as you’ve correctly pointed out, it leads to a situation where Omega must either fail to predict correctly, refuse to participate, or cheat, all of which are out of bounds of the problem.
I don’t think it’s ever wise to ignore the possibility of a superintelligent AI cheating in some manner.
If we ignore that possibility, then yes, the question would be inconsistent, which implies that if the situation were actually to arise, it would be quite likely that either:
The situation has been misunderstood; or
Someone is cheating
Since it is far easier for Omega, being an insane superintelligence, to cheat than it is for someone to cheat Omega, it seems likeliest that if anyone is cheating, then it is Omega.
After all, Omega had, and did not take, the option to refuse to participate.
I expect that this is the choice that Omega will be most likely to take; one of the easiest ways to do this is by ignoring the spirit of the constraints and taking the exact literal meaning.
The constraints aren’t constraints on Omega; the constraints are constraints on the reader—they tell the reader what he is supposed to use as the premises of the scenario. Omega cannot cheat unless the reader interprets the description of the problem to mean that Omega is willing to cheat. And if the reader does interpret it that way, it’s the reader, not Omega, who’s violating the spirit of the constraints and being hyper-literal.
what do you think Omega would do?
I think that depending on the human’s intentions, and assuming the human is a perfect reasoner, the conditions of the problem are contradictory. Omega can’t always predict the human—it’s logically impossible.
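To make the impossibility concrete, here is a minimal sketch of the standard diagonalization argument. It is illustrative only: the names (contrarian_agent, omega) and the "one_box"/"two_box" labels are mine, not part of the original problem statement. The point is just that an agent able to consult the predictor’s verdict about itself can do the opposite of whatever is predicted, so no deterministic predictor can be right about it.

```python
# Sketch of the diagonalization: any agent that can ask the predictor
# what it expects, and then act against that expectation, defeats the
# predictor by construction.

def contrarian_agent(predictor):
    """Ask the predictor what it expects this agent to do, then do the opposite."""
    prediction = predictor(contrarian_agent)
    return "two_box" if prediction == "one_box" else "one_box"

def omega(agent):
    """Any deterministic predictor must output *some* fixed verdict here;
    whichever verdict it picks, the contrarian agent falsifies it."""
    return "one_box"  # stands in for an arbitrary prediction

actual = contrarian_agent(omega)     # "two_box"
predicted = omega(contrarian_agent)  # "one_box"
assert actual != predicted  # the prediction is wrong by construction
```

Note also that if omega tried to answer by simulating contrarian_agent, the two calls would recurse forever, which is arguably the same contradiction in another guise. The usual framing avoids all this by denying the human that kind of access to Omega’s verdict; granting it is, roughly, what the “perfect reasoner” assumption amounts to.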