Marcus Hutter once wrote:

Another problem connected, but possibly not limited to embodied agents, especially if they are rewarded by humans, is the following: Sufficiently intelligent agents may increase their rewards by psychologically manipulating their human “teachers”, or by threatening them. This is a general sociological problem which successful AI will cause, which has nothing specifically to do with AIXI.

These days, one might say: “this is a general sociological problem which pure reinforcement learning agents will cause—which illustrates why we should not build them.”