Why are AIXI’s possible programs necessarily “models for what the ‘copied AIXI run by Omega’ will output” (I assume you mean generating programs specifically)? They could be interpreted in many possible ways (and, as you point out, they actually can’t be interpreted in this particular way). For Newcomb’s problem we have a problem similar to the one with CM: explaining the problem statement to AIXI. It’s not clear how to formalize this procedure given AIXI’s alien ontology, unless you automatically assume that its programs must be interpreted as the programs generating the toy worlds of thought experiments (worlds that in general can’t include AIXI itself, though they can include AIXI-determined actions; you can have an uncomputable definition that defines a program).
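For concreteness, here is a minimal sketch of the standard reading being questioned above: AIXI’s “possible programs” are computable environments that generate the world (here, rewards from action histories), weighted by description length, and AIXI itself — the uncomputable maximizer over all of them — is not among the environments. All environment names and lengths below are made-up toy values, not anything from this thread.

```python
from itertools import product

# Toy environments: computable programs mapping an action history to
# a reward. AIXI (the maximizer below) is *not* one of these programs.

def env_rewards_a(actions):
    """Environment that pays 1 per 'a' action."""
    return sum(1 for x in actions if x == 'a')

def env_rewards_b(actions):
    """Environment that pays 1 per 'b' action."""
    return sum(1 for x in actions if x == 'b')

# Hypothetical prior: weight 2^-length; lengths are just stipulated here.
ENVIRONMENTS = [(env_rewards_a, 3), (env_rewards_b, 5)]

def expected_value(action_seq):
    """Prior-weighted total reward of an action sequence over all models."""
    return sum(2.0 ** -length * env(action_seq)
               for env, length in ENVIRONMENTS)

def best_plan(horizon=2):
    """Brute-force expectimax over action sequences (no percepts here,
    so it degenerates to plain expected-reward maximization)."""
    return max(product('ab', repeat=horizon), key=expected_value)
```

In this toy model `best_plan()` favors `'a'` simply because the a-rewarding environment has the shorter description. The point at issue in the comment is visible here: nothing in this scheme forces a program to be “a model of what the copied AIXI outputs” — the environments are just percept/reward generators, and a faithful copy of AIXI can’t even appear inside one.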
You’re right, I over-simplified. What AIXI would do in these situations is dependent on how exactly the problem—and AIXI—is specified.