Seems obvious to me that AIXI describes a fully general learner, which is not the same thing as an FAI by any stretch. In particular, it’s missing all of the optimizations you could gain by narrowing the scope, and it’s completely unFriendly. It’s a pure reward maximizer, which makes it a step down from a smiley-face maximizer in terms of safety: it has no humane values at all.
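For the record, Hutter’s own formulation makes this explicit. Roughly (a sketch of the standard AIXI action rule, with horizon m, observation-reward pairs o_i r_i, and environment programs q run on a universal Turing machine U; the notation here is mine, not from the comment above):

$$ a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

The only thing being maximized is the sum of rewards r_k through r_m, with environments weighted by the Solomonoff prior 2^{-\ell(q)}; nothing in the expression says anything about what the rewards ought to be attached to.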
An AIXI solving a mathematical game would optimize. An AIXI operating in the real world would waste an awful lot of time learning basic physics, and then wirehead (seize control of its own reward signal), if you were lucky.