Yeah, and then you kept stipulating that the model where Omega has read the action tape and then either put money into the box or not, without that leaking onto sensory input, is very unlikely; and I noted that the problem statement already stipulates that the box contents do not leak onto sensory input.
The two situations are quite different. Any complexity penalty for the non-leaking box has already been paid via AIXI’s observations of the box and the whole Newcomb’s setup; the opaqueness of the box just boils down to normal reality.
On the other hand, your “action bit” model in which Omega reads AIXI’s action tape is associated with a significant complexity penalty because of the privileged nature of the situation—why specifically Omega, and not anyone else? Why does Omega specifically access that one bit, and not one of the other bits?
The more physics-like and real AIXI’s Turing machines get, the more of a penalty will be associated with Turing machines that need to incorporate a special case for a specific event.
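To make that penalty concrete, here is a back-of-the-envelope sketch in standard Solomonoff terms (not anything specific to Hutter's exact formulation): each program $p$ that reproduces the observation history gets prior weight $2^{-\ell(p)}$, so every extra bit a model spends on special-casing, e.g. wiring Omega to that one particular bit of the action tape, halves its weight:

$$
\xi(x_{1:t}) \;=\; \sum_{p\,:\,U(p)=x_{1:t}\ast} 2^{-\ell(p)},
\qquad
\frac{2^{-(\ell+k)}}{2^{-\ell}} \;=\; 2^{-k}.
$$

A hypothesis that needs $k$ extra bits of special-case machinery starts out $2^{k}$ times less probable than one that gets by without them.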
Let’s say AIXI lives inside a robot named Alice. According to every model employed by AIXI, the robot named Alice has pre-committed, since the beginning of time, to act out a specific sequence of actions. How the hell that assumes magical free will, I don’t know.
edit: and note that you could exclude the machines which read the action before printing the matching sensory data, to actually ensure magical free will. I’m not even sure; maybe some variations by Hutter do just that.
AIXI as defined by Hutter (not just some “variation”) has a foundational assumption that an action at time t cannot influence AIXI’s perceptions at times 1..t-1. This is entirely incompatible with a model of Alice where she has pre-committed since the beginning of time, because such an Alice would be able to discover her own pre-commitment before she took the action in question. AIXI, on the other hand, explicitly forbids world models where that can happen.
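For reference, this is the standard chronological-environment condition in Hutter’s framework (my notation, roughly following his double-bar convention): the probability of the percepts up to time $t-1$ is not allowed to depend on the action taken at time $t$,

$$
\mu(x_{1:t-1}\,\|\,y_{1:t}) \;=\; \mu(x_{1:t-1}\,\|\,y_{1:t-1}) \quad\text{for every choice of } y_t,
$$

so any world model in which percepts received before time $t$ already encode the time-$t$ action, the way a since-the-beginning-of-time pre-commitment would, is simply outside AIXI’s hypothesis class.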
That’s just abstruse. We both know what I mean.
No, I don’t. My point is that although you can’t predict AIXI in the general case, there are still many cases where AIXI can be predicted with relative ease. My argument is still that Newcomb’s problem is one of those cases (and that AIXI two-boxes).
As for all of your scenarios with different Omegas or different amounts of money, obviously a major factor is how accurate I think Omega’s predictions are. If ze has only been wrong one time in a million, and this includes people who have been one-boxing as well, why should I spend much time thinking about the possibility that I could be the one time ze gets it wrong?
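To put a number on it, with the standard payoffs (\$1,000,000 in the opaque box, \$1,000 in the transparent one; both figures assumed here rather than taken from this thread) and that one-in-a-million error rate applied to my own choice:

$$
\begin{aligned}
\mathbb{E}[\text{one-box}] &\approx 0.999999 \times \$1{,}000{,}000 \;\approx\; \$999{,}999,\\
\mathbb{E}[\text{two-box}] &\approx \$1{,}000 + 0.000001 \times \$1{,}000{,}000 \;=\; \$1{,}001.
\end{aligned}
$$

The branch where I turn out to be the one case ze gets wrong is worth about a dollar of expected value, which is why I don’t spend much time on it.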
Similarly, if you’re playing Omega and you don’t have a past history of correctly predicting one-boxers vs two-boxers, then yes, I two-box. However, that scenario isn’t Newcomb’s problem. For it to be Newcomb’s problem, Omega has to have a history of correctly predicting one-boxers as well as two-boxers.