AIXI is not an example of a system that can reason about goals without incurring goal instability, because it is not an example of a system that can reason about goals.
… plus we say that in the paper :)