AIXI-tl is only superior to those bounded algorithms whose good performance can be formally proven within some fixed proof system (and AIXI-tl itself is not an example of such an algorithm). So the properties attributed to it here are a little too strong. I also do not think of it as a direct approximation of AIXI.
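For reference, my rough recollection of Hutter's optimality claim for AIXI-tl (the exact statement and notation in the paper differ, so treat this as a sketch rather than the theorem): the constructed policy is only claimed superior to policies p that are short enough, fast enough, and whose validity is provable in the fixed proof system.

```latex
% Sketch of the AIXI-tl optimality condition as I remember it (not Hutter's exact wording).
% \tilde{l}, \tilde{t} are the length and per-cycle time bounds;
% VA(p) is the condition that p's self-reported value estimates are provably valid.
\text{for all policies } p:\quad
\ell(p) \le \tilde{l} \;\wedge\; t(p) \le \tilde{t} \;\wedge\; \vdash \mathrm{VA}(p)
\;\Longrightarrow\;
V^{\mathrm{AIXI}tl} \ge V^{p}
```

The point is that the quantifier ranges only over provably-well-behaved bounded policies, not over all bounded policies, which is why I say the properties attributed to AIXI-tl in the post are too strong.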
The argument that an embedded AIXI cannot modify its own source code is philosophically flawed because there is no source code that implements AIXI. Though this is not exactly a substantive rebuttal, I think your criticism of AIXI should be phrased more carefully if it is to be taken completely seriously and addressed.
As other commenters have pointed out, some of the issues here seem to come from the fact that AIXI has no apparatus for modifying itself, since self-modification makes no sense within the framework. As the post seems to hint at, such apparatus could be introduced into the model with varying degrees of care. I would not be surprised if some naive patch is sufficient (e.g., an AIXI approximation chooses the "thoughts" executed by a lower-level agent); a rough sketch of what I mean is below.
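To make that "naive patch" concrete, here is a minimal Python sketch of the two-level setup I have in mind. All of the names and the value-estimation stub are made up for illustration; nothing here is an actual AIXI approximation, only the shape of the architecture.

```python
# Hypothetical sketch: treat the lower-level agent's internal operations
# ("thoughts") as just another action space, and let an AIXI-style
# approximation pick among them. Self-modification then never has to
# appear inside the AIXI formalism itself.

import random

class LowerLevelAgent:
    """Concrete, modifiable agent: its 'thoughts' rewrite its own policy table."""
    def __init__(self):
        self.policy = {}  # observation -> external action

    def execute_thought(self, thought):
        # A 'thought' here is simply an edit to the policy table.
        obs, action = thought
        self.policy[obs] = action

    def act(self, obs):
        return self.policy.get(obs, "noop")

class ApproximatePlanner:
    """Stand-in for an AIXI approximation: scores candidate thoughts by
    estimated return under its environment model (here just a placeholder)."""
    def choose_thought(self, candidate_thoughts):
        return max(candidate_thoughts, key=self.estimate_value)

    def estimate_value(self, thought):
        return random.random()  # placeholder for model-based value estimation

# The loop: the planner picks thoughts; the lower-level agent is the thing
# that actually changes over time.
planner, agent = ApproximatePlanner(), LowerLevelAgent()
for step in range(3):
    candidates = [("obs_a", "left"), ("obs_a", "right"), ("obs_b", "wait")]
    agent.execute_thought(planner.choose_thought(candidates))
    print(step, agent.act("obs_a"))
```

Whether a patch this naive actually addresses the embeddedness worries in the post is exactly the open question; the sketch is only meant to show that the framework does not obviously forbid such an arrangement.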