a physical, approximate implementation of AIXI is likely to develop a reductionist world model, doubt that its decisions have any effect on reality, and begin behaving completely erratically.
Uh huh. So what is your theory about why Hutter and Legg haven’t noticed this so far?
AIXI works fine in the setting where it is supposed to work. I don’t think anyone expects that AIXI works well when it is very powerful and actually embedded in the universe. My comment doesn’t really change the state of affairs (since it already seems pretty indisputable that AIXI will do undesirable things like breaking itself).