For Eq. 22 in the paper you linked, trace the definitions back to Eq. 16, which describes Solomonoff induction.
It uses input-less programs to obtain the joint probability distribution, then divides it by the marginal distribution to obtain the conditional probability distribution it needs.
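Concretely, a hedged sketch of the Solomonoff-style construction being described, in roughly standard notation (the paper’s exact equations may differ slightly):

    M(x_{1:k}) := \sum_{p : U(p) = x_{1:k}*} 2^{-l(p)}        % joint: input-less programs p whose output starts with x_{1:k}
    M(x_k | x_{<k}) := M(x_{1:k}) / M(x_{<k})                 % conditional: joint divided by marginal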
Nope, the environment q is a chronological program; it takes AIXI’s action sequence and outputs an observation sequence, with the restriction that observations cannot be dependent upon future actions.
Basically, it is assumed that the universal Turing machine U is fed both the environment program q and AIXI’s action sequence y, and outputs AIXI’s observation sequence x by running the program q with input y. Quoting from the paper I linked: “Reversely, if q already is a binary string we define q(y):=U(q,y)”
In the paper I linked, see Eq. 21:
\xi(yx_{1:k}) = \sum_{q:q(y_{1:k})=x_{1:k}} 2^{-l(q)}, and the term \sum_{q:q(y_{1:m})=x_{1:m}} 2^{-l(q)} from Eq. 22.
In other words, any program q that matches AIXI’s observations to date when given AIXI’s actions to date will be part of the ensemble. In order to evaluate different future action sequences, AIXI then evaluates the different future actions it could take by feeding them to its program ensemble, and summing over different possible future rewards conditional on the environments that output those rewards.
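As a toy illustration of that filter-then-evaluate structure, here is a hedged Python sketch over a finite hypothesis class. The names (environments, weights, choose_action, action_set, horizon) and the open-loop planning over a fixed horizon are my own simplifying assumptions: real AIXI sums over all chronological programs with weights 2^{-l(q)}, which is incomputable, and interleaves maximization over actions with expectation over observations.

    from itertools import product

    # Toy sketch (assumed, not Hutter's construction): each "environment" q maps an
    # action sequence to a pair (observations, rewards), one entry per action.

    def choose_action(environments, weights, past_actions, past_observations,
                      action_set, horizon):
        # 1. Keep only the environments that reproduce the observations seen so far
        #    when fed AIXI's past actions (the "ensemble" of surviving programs).
        ensemble = [(q, w) for q, w in zip(environments, weights)
                    if q(list(past_actions))[0] == list(past_observations)]

        # 2. Score each candidate sequence of future actions by the weighted sum
        #    of the future rewards the surviving environments predict for it.
        def score(plan):
            total = 0.0
            for q, w in ensemble:
                _, rewards = q(list(past_actions) + list(plan))
                total += w * sum(rewards[len(past_actions):])
            return total

        # 3. Commit to the first action of the best-scoring future action sequence.
        best_plan = max(product(action_set, repeat=horizon), key=score)
        return best_plan[0]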
If you tell AIXI: “Look, the transparent box contains $1,000 and the opaque box may contain $0 or $1,000,000. Do you want to take only the content of the opaque box, or the contents of both boxes?”, then AIXI will two-box, just as you would.
Clearly the scenario where there is no Omega and the content of the opaque box is independent of your action is simpler than Newcomb’s problem.
The CDT agent can correctly argue that Omega already left the million dollars out of the box when the CDT agent was presented the choice, but that doesn’t mean that it’s correct to be a CDT agent. My argument is that AIXI suffers from the same flaw, and so a different algorithm is needed.
But if you convince AIXI that it’s actually facing Newcomb’s problem, then its surviving world-programs must model the action of Omega somewhere in their “physics modules”.
Correct. My point is that AIXI’s surviving world-programs boil down to “Omega predicted I would two-box, and didn’t put the million dollars in”, but it’s the fault of the AIXI algorithm that this happens.
The simplest way of doing that is probably to assume that there is some physical variable which determines AIXI’s next action (remember, the world programs predict actions as well as the inputs), and Omega can observe it and use it to set the content of the opaque box. Or maybe they can assume that Omega has a time machine or something.
As per the AIXI equations, this is incorrect. AIXI cannot recognize the presence of a physical variable determining its next action because for any environment program AIXI’s evaluation stage is always going to try both the OneBox and TwoBox actions. Given the three classes of programs above, the only way AIXI can justify one-boxing is if the class (3) programs, in which its action somehow causes the contents of the box, win out.
“Reversely, if q already is a binary string we define q(y):=U(q,y)”
Ok, missed that. I don’t think it matters to the rest of the argument, though.
As per the AIXI equations, this is incorrect. AIXI cannot recognize the presence of a physical variable determining its next action because for any environment program AIXI’s evaluation stage is always going to try both the OneBox and TwoBox actions. Given the three classes of programs above, the only way AIXI can justify one-boxing is if the class (3) programs, in which its action somehow causes the contents of the box, win out.
An environment program can just assume a value for the physical variable and then abort by failing to halt if the next action doesn’t match it (see the sketch at the end of this comment). Or it can assume that the physical simulation branches at time t0, when Omega prepares the box, simulate each branch until t1, when the next AIXI action occurs, and then kill off the branch corresponding to the wrong action. Or, as has already been proposed by somebody else, it could internally represent the physical world as a set of constraints and then run a constraint solver on it, without the need of performing a step-by-step chronological simulation.
So it seems that there are plenty of environment programs that can represent the action of Omega without assuming that it violates the known laws of physics. But even if it had to, what is the problem? AIXI doesn’t assume that the laws of physics forbid retro-causality.
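To make the first of those options concrete, here is a hypothetical Python sketch of such an “assume-and-abort” environment program; the names (environment, GUESSED_ACTION, DECISION_STEP), the payoff amounts, and the single decision step are illustrative assumptions, not anything from the paper.

    # Hypothetical "assume-and-abort" environment: it hard-codes a guess for AIXI's
    # decision, and if the actual action ever differs it diverges (never halts), so
    # it drops out of the ensemble instead of being falsified by a wrong observation.

    GUESSED_ACTION = "one-box"      # the assumed value of the "physical variable"
    DECISION_STEP = 0               # the time step at which AIXI chooses

    def environment(actions):
        # Omega fills the opaque box according to the guessed action.
        opaque_box = 1_000_000 if GUESSED_ACTION == "one-box" else 0
        observations = []
        for t, action in enumerate(actions):
            if t == DECISION_STEP and action != GUESSED_ACTION:
                while True:         # abort by failing to halt
                    pass
            # From the decision step on, AIXI observes the money it actually collected.
            payoff = opaque_box + (1_000 if action == "two-box" else 0)
            observations.append(payoff)
        return observations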
An environment program can just assume a value for the physical variable and then abort by failing to halt if the next action doesn’t match it.
Why would AIXI come up with something like that? Any such program is clearly more complex than one that does the same thing but doesn’t fail to halt.
Or it can assume that the physical simulation branches at time t0, when Omega prepares the box, simulate each branch until t1, when the next AIXI action occurs, and then kill off the branch corresponding to the wrong action.
Once again, possible but unnecessarily complex to explain AIXI’s observations.
Or, as has already been proposed by somebody else, it could internally represent the physical world as a set of constraints and then run a constraint solver on it, without the need of performing a step-by-step chronological simulation.
Sure, but the point is that those constraints would still be physics-like in nature. Omega’s prediction accuracy is much better explained by constraints that are physics-like rather than by an extra constraint that says “Omega is always right”. If you assume a constraint of the latter kind, you are still forced to explain all the other aspects of Omega: things like Omega walking, Omega speaking, and Omega thinking, or more generally Omega doing all those things that ze does. Also, if Omega isn’t always right, but is instead right only 99% of the time, then the constraint-based approach is penalized further.
So it seems that there are plenty of environment programs that can represent the action of Omega without assuming that it violates the known laws of physics. But even if it had to, what is the problem? AIXI doesn’t assume that the laws of physics forbid retro-causality.
It doesn’t assume that, no, but because it assumes that its observations cannot be affected by its future actions AIXI is still very much restricted in that regard.
My point is a simple one:
If AIXI is going to end up one-boxing, the simplest model of Omega will be one that used its prediction method and already predicted that AIXI would one-box.
If AIXI is going to end up two-boxing, the simplest model of Omega will be one that used its prediction method and already predicted that AIXI would two-box.
However, if Omega predicted one-boxing and AIXI realized that this was the case, AIXI would still evaluate that the two-boxing action results in AIXI getting more money than the one-boxing action, which means that AIXI would two-box.
As long as Omega is capable of reaching this relatively simple logical conclusion, Omega thereby knows that a prediction of one-boxing would turn out to be wrong, and hence Omega should predict two-boxing; this will, of course, turn out to be correct.
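Spelling out the payoff arithmetic behind that evaluation, using the standard Newcomb amounts quoted earlier and holding Omega’s already-made prediction fixed:

    Omega predicted one-boxing:  one-box -> $1,000,000    two-box -> $1,001,000
    Omega predicted two-boxing:  one-box -> $0             two-box -> $1,000

In either surviving model the two-boxing branch scores exactly $1,000 higher, which is why AIXI’s evaluation step favours two-boxing once the prediction is treated as fixed.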
The kinds of models you’re suggesting, with branching etc. are significantly more complex and don’t really serve to explain anything.
It doesn’t assume that, no, but because it assumes that its observations cannot be affected by its future actions AIXI is still very much restricted in that regard.
But this doesn’t matter for Newcomb’s problem, since AIXI observes the content of the opaque box only after it has made its decision.
However, if Omega predicted one-boxing and AIXI realized that this was the case, AIXI would still evaluate that the two-boxing action results in AIXI getting more money than the one-boxing action, which means that AIXI would two-box.
Which means that the epistemic model was flawed with high probability. You are insisting that the flawed model is simpler than the correct one. This may be true for certain states of evidence where AIXI is not convinced that Omega works as advertised, but you haven’t shown that this is true for all possible states of evidence.
The kinds of models you’re suggesting, with branching etc. are significantly more complex and don’t really serve to explain anything.
They may be more complex only up to a small constant overhead (how many bits does it take to include a condition “if OmegaPrediction != NextAction then loop forever”?), therefore, a constant amount of evidence should be sufficient to select them.
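To put a rough number on that claim: if q is a consistent guard-free environment of length l(q) and q' is q plus the guard, then

    l(q') <= l(q) + c   =>   2^{-l(q')} >= 2^{-c} * 2^{-l(q)}

for some small constant c (the length of the guard), so on the order of c bits of evidence against the guard-free model should suffice for q' to overtake it. (This is my paraphrase of the complexity claim, not a calculation from the paper.)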
Which means that the epistemic model was flawed with high probability.
You are insisting that the flawed model is simpler than the correct one. This may be true for certain states of evidence where AIXI is not convinced that Omega works as advertised, but you haven’t shown that this is true for all possible states of evidence.
Yes, AIXI’s epistemic model will be flawed. This is necessarily true because AIXI is not capable of coming up with the true model of Newcomb’s problem, which is one in which its action and Omega’s prediction of its action share a common cause. Since AIXI isn’t capable of having a self-model, the only way it could possibly replicate the behaviour of the true model is by inserting retrocausality and/or magic into its environment.
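For what it’s worth, the common-cause structure being referred to is roughly this (a schematic of the claim, not anything formal from the paper):

    agent's decision algorithm -> agent's action
    agent's decision algorithm -> Omega's prediction -> contents of the opaque box

Since AIXI cannot place its own decision algorithm inside its environment models, the correlation between its action and the box contents has to enter its world-programs some other way.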
They may be more complex only up to a small constant overhead (how many bits does it take to include a condition “if OmegaPrediction != NextAction then loop forever”?), therefore, a constant amount of evidence should be sufficient to select them.
I’m not even sure AIXI is capable of considering programs of this kind, but even if it is, what kind of evidence can AIXI have received that would justify the condition “if OmegaPrediction != NextAction then loop forever”? What evidence would justify such a model over a strictly simpler version without that condition?
Essentially, you’re arguing that rather than coming up with a correct model of its environment (e.g. one in which Omega makes a prediction on the basis of the AIXI equation), AIXI will somehow make up for its inability to self-model by coming up with an inaccurate and obviously false retrocausal/magical model of its environment instead.
However, I don’t see why this would be the case. It’s quite clear that either Omega has already predicted one-boxing, or Omega has already predicted two-boxing. At the very least, the evidence should narrow things down to models of either kind, although I think that AIXI should easily have sufficient evidence to work out which of them is actually true (that being the two-boxing one).