It takes as input a description of the agent it’s predicting: typically source code, but in AIXI’s case it gets the AIXI equation plus AIXI’s sequence of prior observations.
As for what it does, it spends some period of time (maybe a very long one) on whatever deductive and/or inductive reasoning it chooses, in order to establish with a reasonable level of confidence what the agent it’s trying to predict will do.
Yes, AIXI being uncomputable means that Omega can’t simply run the equation for itself, but there is no need for a perfect prediction here. It just needs to come up with a well-reasoned argument for why AIXI will take a particular action, or perhaps run a computable approximation of AIXI for a while. Moreover, anyone in this thread arguing for either one-boxing or two-boxing has already implicitly agreed that such a prediction is possible.
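For concreteness, here’s a toy sketch of that setup (all names and numbers are hypothetical illustrations; `bounded_agent_model` stands in for whatever computable approximation of AIXI Omega actually uses):

```python
# Toy sketch: Omega can't run an uncomputable agent, so it predicts by
# running a computable, resource-bounded stand-in for it and fills the
# boxes according to that prediction. All names are hypothetical.

def bounded_agent_model(believed_accuracy: float) -> str:
    """Crude stand-in for a bounded approximation of AIXI: pick the
    action with the higher expected payoff, given the modeled agent's
    belief about how accurate Omega's prediction is."""
    ev_one_box = believed_accuracy * 1_000_000
    ev_two_box = (1 - believed_accuracy) * 1_000_000 + 1_000
    return "one-box" if ev_one_box > ev_two_box else "two-box"

def fill_boxes(prediction: str) -> dict:
    """Standard Newcomb setup: the opaque box holds $1M iff Omega
    predicts one-boxing; the transparent box always holds $1K."""
    return {"opaque": 1_000_000 if prediction == "one-box" else 0,
            "transparent": 1_000}

prediction = bounded_agent_model(believed_accuracy=0.99)  # Omega's guess
boxes = fill_boxes(prediction)
# The prediction is good exactly insofar as the bounded model tracks
# what the real (uncomputable) agent will actually do.
```

The gap between the stand-in model and the real agent is exactly where the misprediction worry below comes in.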
The fact that Omega may only be running an approximation opens up the possibility that AIXI figures out that Omega is going to mispredict it, which would make two-boxing the best decision.
I think it is generally assumed that, even if Omega is not a perfect predictor, the agent can’t outsmart it and predict its errors. But if Omega is computable and the agent is uncomputable, this doesn’t necessarily hold true.
As for the claim that anyone arguing here has already implicitly agreed with that assumption: I’m not so sure this is true. People in this thread arguing that AIXI does something at least have the advantage that AIXI’s decision is not going to depend on how they do the arguing. The fact that AIXI can simulate Omega with perfect fidelity (assuming Omega is not also a hypercomputer) and will make its decision based on that simulation seems like it might undermine Omega’s ability to make a good prediction.
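To illustrate the mechanism (as a diagonalization argument, not as a claim about what a reward-maximizing AIXI would actually do), here’s a toy sketch with hypothetical names: an agent that can simulate a computable predictor perfectly can always act contrary to its prediction.

```python
# Toy sketch: no computable prediction rule can be right about an agent
# that simulates the rule and then does the opposite. This is a pure
# diagonalization, not a payoff-maximizing strategy; it only shows why
# Omega being computable matters. All names are hypothetical.

def omega_predict(agent_description: str) -> str:
    """Stands in for any fixed, computable prediction procedure."""
    return "one-box"

def contrarian_agent(predictor) -> str:
    """Simulates the predictor on this agent's own description (perfect
    simulation is possible because the predictor is computable), then
    does whatever falsifies the prediction."""
    predicted = predictor("contrarian_agent")
    return "two-box" if predicted == "one-box" else "one-box"

# Whatever omega_predict returns, the agent's actual choice differs:
assert contrarian_agent(omega_predict) != omega_predict("contrarian_agent")
```

A reward maximizer that simulates Omega isn’t obliged to diagonalize like this (once it sees the boxes’ contents are fixed, two-boxing dominates either way, and Omega can predict two-boxing self-consistently); the diagonal case just shows that reliable prediction of arbitrary agents that can simulate you is off the table.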
Alternatively, what about a version of Newcomb’s problem where the predictor’s source code is shown to AIXI before it makes its decision?
What would the source code of an Omega able to predict an AIXI look like?
It won’t have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency, and describe Omega as an ordinary machine with access to that oracle.
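As a purely illustrative sketch of what that means (a halting oracle can’t actually be implemented, so it appears only as an assumed black box):

```python
# Sketch: Omega as an oracle machine. The finite part is ordinary code;
# the extra power lives in the posited `halts` oracle, which no real
# Python function can implement. Everything here is illustrative.

from typing import Callable

# A halting oracle answers, for any (program_source, input) pair,
# whether that program halts on that input.
HaltingOracle = Callable[[str, str], bool]

def omega(agent_description: str, halts: HaltingOracle) -> str:
    """A finite description of Omega: ordinary control flow plus oracle
    queries. This is the sense in which Omega is well-defined without
    having (unaided) source code."""
    # Hypothetical use: ask the oracle whether some search over the
    # agent's behavior terminates, then predict accordingly.
    if halts("some_search_over_agent_behavior", agent_description):
        return "one-box"  # placeholder for the reasoned prediction
    return "two-box"
```

Whether a single halting oracle suffices to reproduce AIXI’s decisions exactly is a separate technical question; the point is only that such an Omega can be posited consistently.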