AIXI is incapable of understanding the concept of copies or counterfactual versions of itself. In fact, it’s incapable of finding itself in the universe at all. Daniel Dewey has worked this out in detail, but the simple version is that AIXI is an uncomputable algorithm that models the whole universe as computable.
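For reference, here is the standard expectimax definition of AIXI, in roughly Hutter’s notation (a sketch; I’m suppressing details of the horizon and the chronological machine). The point is that the inner sum ranges only over computable environment programs q, while the expression as a whole is not computable:

$$a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}$$

Here the $a_i$, $o_i$, $r_i$ are actions, observations and rewards, $U$ is a universal (monotone) Turing machine, $m$ is the horizon, and $\ell(q)$ is the length of program $q$.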
This doesn’t really clarify anything. You can consider AIXI as a formal definition of a strategy that behaves a certain way; whether this definition “understands” something is a wrong question.
No, it isn’t the wrong question; it’s a human-understandable statement of a more complex formal fact.
Formally, take the Newcomb problem. Assume Omega copies AIXI and runs the copy to make its prediction, then returns to the original AIXI to offer the usual deal. AIXI has various models of what the “copied AIXI run by Omega” will output, weighted by algorithmic complexity. But all of these models will be wrong, since the models are all computable and the copied AIXI is not.
It runs into a similar problem when trying to self-locate in a universe: its models of the universe are computable, while it itself is not, so it can’t locate itself as a piece of the universe.
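To put both points in that notation (a rough sketch, not a full formalization): every hypothesis AIXI entertains is a computable program q, weighted by a Solomonoff-style prior

$$\xi(x) \;=\; \sum_{q \,:\, U(q) \,=\, x*} 2^{-\ell(q)},$$

where the sum runs over programs whose output starts with x (I’m eliding the conditioning on AIXI’s own actions). The posterior only ever puts weight on computable environments, and since the expectimax above is not computable, no q in that support can contain an exact copy of AIXI, neither the copy Omega runs nor an embedded instance of AIXI inside its own universe.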
Why are AIXI’s possible programs necessarily “models of what the ‘copied AIXI run by Omega’ will output” (generating programs specifically, I assume you mean)? They could be interpreted in many possible ways (and, as you point out, they actually can’t be interpreted in this particular way). For Newcomb’s problem, we have a similar problem as with CM of explaining the problem statement to AIXI, and it’s not clear how to formalize this procedure given AIXI’s alien ontology, if you don’t automatically assume that its programs must be interpreted as the programs generating the toy worlds of thought experiments (which in general can’t include AIXI, though they can include AIXI-determined actions; you can have an uncomputable definition that defines a program).
You’re right, I over-simplified. What AIXI would do in these situations depends on how exactly the problem, and AIXI itself, is specified.