Therefore, they cannot identify “that computer running the code” with “me”, and would cheerfully destroy themselves in the pursuit of their goals/reward.
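For reference, the expectimax expression that defines AIXI's action choice (a rough sketch in Hutter's notation; $U$ is a universal Turing machine, $\ell(q)$ the length of program $q$, and $m$ the horizon):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Every environment hypothesis $q$ is a program that receives the agent's actions as input; the agent's own computation never appears inside any $q$, so no hypothesis represents the machine doing the maximisation being destroyed.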
I am curious as to why an AIXI-like entity would need to model itself (and all its possible calculations) in order to differentiate the code it is running from the external universe.
The human in charge of a reward channel could work for initial versions, but once its intelligence grew, wouldn't it know what was happening (like the boxed-AI example, which is not likely to work in the long term)?
I am curious as to why an AIXI-like entity would need to model itself (and all its possible calculations) in order to differentiate the code it is running from the external universe.
See other posts on this problem (some of them are linked in the post above).
The human in charge of a reward channel could work for initial versions, but once its intelligence grew, wouldn't it know what was happening?
At this point, the “hope” is that the AIXI will have made sufficient generalisations to keep it going.