It is not possible for an agent to make a rational choice between taking one box or two if both the agent and Omega can be simulated by Turing machines. Proof: Omega predicts the agent’s decision by simulating it. This requires Omega to have greater algorithmic complexity than the agent (including the nonzero complexity of the compiler or interpreter). But a rational choice by the agent requires that it simulate Omega, which would require the agent to have the greater algorithmic complexity instead.
In other words, the agent X, with complexity K(X), must model Omega, whose complexity is K(X + “put $1 million in box B if X does not take box A”) and therefore slightly greater than K(X).
In the framework of AIXI’s ideal rational agent, the agent guesses that Omega is the shortest program consistent with the interaction observed so far. But it can never guess Omega correctly, because Omega’s complexity is greater than the agent’s own. Since AIXI is optimal, no other agent can make a rational choice either.
As an aside, this is also a wonderful demonstration of the illusion of free will.
Um, AIXI is not computable. Relatedly, K(AIXI) is undefined, as AIXI is not a finite object.
Also, A can simulate B, even when K(B)>K(A). For example, one could easily define a computer program which, given sufficient computing resources, simulates all Turing machines on all inputs. This must obviously include those with much higher Kolmogorov complexity.
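A minimal sketch of that point (my own illustration, not from the comment): a short, fixed dovetailing program can simulate every program of some Turing-complete language, and hence simulates programs of arbitrarily high Kolmogorov complexity. Here Brainfuck stands in for “all Turing machines” purely because its interpreter is tiny; the step-limited dovetailing loop is the part that matters.

```python
# Sketch: a short program that simulates ALL Brainfuck programs by dovetailing,
# including ones with far higher Kolmogorov complexity than this simulator.
from itertools import count, product

ALPHABET = "+-<>[].,"

def run_bf(code, max_steps):
    """Run a Brainfuck program on empty input for at most max_steps steps.
    Returns the (possibly partial) output, or None if brackets are unbalanced."""
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None

    tape, ptr, pc, out = [0] * 30000, 0, 0, []
    for _ in range(max_steps):
        if pc >= len(code):
            break
        c = code[pc]
        if c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">":
            ptr = (ptr + 1) % len(tape)   # wraparound tape for simplicity
        elif c == "<":
            ptr = (ptr - 1) % len(tape)
        elif c == ".":
            out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]
        # "," reads input; with empty input we treat it as a no-op.
        pc += 1
    return bytes(out)

def all_programs():
    """Enumerate every Brainfuck program, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

def dovetail(rounds):
    """Round k runs the first k programs for k steps each, so every program
    eventually gets an unbounded step budget as rounds grow."""
    progs, gen = [], all_programs()
    for k in range(1, rounds + 1):
        progs.append(next(gen))
        for p in progs:
            run_bf(p, k)

if __name__ == "__main__":
    dovetail(200)  # runs (prefixes of) ~20,000 program executions
```

The simulator itself never gets longer, no matter how complex the programs it steps through become; only the step budget grows.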
Yes, you run into issues of two Turing machines/agents/whatever simulating each other. (You could also get this from the recursion theorem.) What happens then? Simple: neither simulation ever halts.
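As a toy illustration (hypothetical agents, not anything from the thread): if each of two agents decides by simulating the other, the regress never bottoms out. In the idealized Turing-machine setting neither simulation halts; on a real machine the recursion simply blows the stack.

```python
# Toy mutual simulation: A predicts B, B predicts A, and neither ever returns.

def agent_a():
    # A decides by predicting B's decision...
    return not agent_b()

def agent_b():
    # ...and B decides by predicting A's decision.
    return not agent_a()

if __name__ == "__main__":
    try:
        agent_a()
    except RecursionError:
        print("Neither simulation ever halts (here: the stack overflows).")
```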
“But a rational choice by the agent requires that it simulate Omega”
Not so. I don’t need to simulate a hungry tiger in order to stay safely (and rationally) away from it, even though I don’t know the exact methods by which its brain will identify me as a tasty treat. If you think that one can’t “rationally” stay away from hungry tigers, then we’re using the word “rationally” vastly differently.