I am curious why an AIXI-like entity would need to model itself (and all its possible calculations) in order to differentiate the code it is running from the external universe.
See other posts on this problem (some of them are linked to in the post above).
A human in charge of the reward channel could work for initial versions, but once the AI's intelligence grew, wouldn't it know what was happening?
At this point, the “hope” is that the AIXI will have made sufficient generalisations to keep it going.