Three AIXI researchers commented on a draft of this post and on Solomonoff Cartesianism. I'm posting their comments here, anonymized, for discussion. You can find AIXI Specialist #1's comments here.
AIXI Specialist #2 wrote:
Pro: This is a mindful and well-intended dialogue, far more thoughtful than the average critique of AIXI, especially by computer scientists.
Con: The write-up should be cleaned up. It reads like a raw transcript of a live conversation.
Neutral: I think this is good philosophy, and potentially interesting, but only once AIXI reaches intelligence far beyond the human level. The arguments don't show that AIXI runs into problems up to human-level intelligence; in the same way, discussions of (the hard problem of) consciousness are irrelevant for building AGIs.
Since an embodied AIXI can observe some aspects of its body, it will build a model of that body of sufficient quality to speculate about self-modification. While formally AIXI cannot model another AIXI (or itself) perfectly, it will develop increasingly accurate approximations.
Humans will also never be able to fully understand themselves, but that is not necessary for most reasoning about self-modification, except for the most fundamental "Goedelian" questions, which neither machines nor humans can answer. Penrose is wrong.
AIXI is not limited to a finite horizon. That is just the simplest model, used for didactic purposes. The real AIXI has either (universal) discounting (Hutter's book) or mixes over discounted environments (Shane Legg's thesis).
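[For readers unfamiliar with the discounted formulation referred to here, a rough sketch with illustrative notation, following the general-discount setup in Hutter's book: given a summable discount sequence γ_t, the normalized value of a policy π under the universal mixture ξ after history h_{<k} is]

$$ V^{\pi}_{\xi\gamma}(h_{<k}) \;=\; \frac{1}{\Gamma_k}\,\mathbb{E}^{\pi}_{\xi}\!\left[\sum_{t=k}^{\infty} \gamma_t\, r_t \,\middle|\, h_{<k}\right], \qquad \Gamma_k := \sum_{t=k}^{\infty} \gamma_t \,<\, \infty. $$

[A finite horizon m corresponds to γ_t = 1 for t ≤ m and γ_t = 0 afterwards; geometric discounting to γ_t = γ^t; and a "universal" discount roughly to γ_t = 2^-K(t), whose effective horizon keeps growing with t.]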
Independent of the discounting, AIXI (necessarily) includes in its hypothesis mixture algorithms with finite output (either looping forever or halting). This can be interpreted as a belief in death. The a priori probability AIXI assigns to dying at time t is about 2^-K(t), but I think it can get arbitrarily close to 1 with "appropriate" experience. Worth a formal (dis)proof.
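[To unpack the 2^-K(t) figure — a heuristic gloss, not part of the comment: the Solomonoff prior weights each program q by 2^-ℓ(q), and specifying "stop producing output at step t" costs roughly K(t) bits, so]

$$ \Pr[\text{no percepts after step } t] \;\approx\; \sum_{q \,:\, q \text{ stops at } t} 2^{-\ell(q)} \;\approx\; 2^{-K(t)}, $$

[where ℓ(q) is the length of program q and K(t) is the Kolmogorov complexity of the integer t.]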
Human children start with a first-person, egocentric world-view, but slowly learn that a third-person world model is more useful. How an objective world-view can emerge from the purely subjective perspective of a Solomonoff inductor is explained in http://dx.doi.org/10.3390/a3040329
There are many subtle philosophical questions related to AIXI, but most can only be definitively settled by formal investigation. More effort should be put into that. Verbal philosophical arguments may lead to such investigations, but they seldom conclusively prove anything by themselves.
AIXI Specialist #3 wrote, in part:
Unrelated to the article, there are problems with AIXI and other Bayesian agents. For example, it was proven that there exist computable environments in which no such agent can be asymptotically optimal [Asymptotically Optimal Agents, Lattimore and Hutter 2011] (an asymptotically optimal policy is one that eventually converges to the optimal policy). However, a notion of weak asymptotic optimality can be defined (a weakly asymptotically optimal policy eventually converges to optimal on average), and unfortunately a pure Bayes-optimal policy fails to satisfy even that in some environments. There are, however, agents that can be constructed which do satisfy it [ibid.]. The key problem is in fact that AIXI and other pure Bayesian agents stop exploring at some point, whereas for really tricky environments you need to explore infinitely often (basically, environments can be constructed that lull Bayesian agents into a false sense of security and then pull a switcharoo; these environments are really annoying).
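[For concreteness, the two notions contrasted here can be stated roughly as follows, paraphrasing Lattimore and Hutter 2011, with V^*_μ the optimal value and V^π_μ the value of policy π in environment μ from the class M:]

$$ \text{asymptotic optimality:} \quad V^{*}_{\mu}(h_{<t}) - V^{\pi}_{\mu}(h_{<t}) \;\to\; 0 \ \text{ as } t \to \infty, \ \text{a.s., for all } \mu \in \mathcal{M}; $$

$$ \text{weak asymptotic optimality:} \quad \frac{1}{n}\sum_{t=1}^{n}\Big( V^{*}_{\mu}(h_{<t}) - V^{\pi}_{\mu}(h_{<t}) \Big) \;\to\; 0 \ \text{ as } n \to \infty, \ \text{a.s., for all } \mu \in \mathcal{M}. $$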
That being said, AIXI approximations should still perform very well in the real world. To build a real-world agent, one needs restrictions on the environment class, and once these restrictions are made, one can more carefully tailor exploration-exploitation trade-offs, as well as learning algorithms that exploit structure within the environment class to learn faster (the bounds for general learning in specific environment classes are horrendous).
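[As a toy illustration of "restricting the environment class and tailoring exploration" — not from the comment; all environments, arm probabilities, and names below are made up for the sketch — consider a Bayesian mixture agent over a small, known class of Bernoulli bandits, with a forced-exploration rate ε_t = 1/t that decays but sums to infinity, so exploration never stops entirely:]

```python
import random

# Toy restricted environment class: Bernoulli bandits with known candidate
# payoff tables. The "true" environment is one of these, unknown to the agent.
CANDIDATE_ENVS = [
    {"arm_probs": (0.2, 0.8)},   # hypothetical environment 1
    {"arm_probs": (0.7, 0.3)},   # hypothetical environment 2
    {"arm_probs": (0.5, 0.5)},   # hypothetical environment 3
]

def likelihood(env, arm, reward):
    """P(reward | env, arm) for a Bernoulli arm."""
    p = env["arm_probs"][arm]
    return p if reward == 1 else 1.0 - p

def bayes_mixture_agent(true_env, steps=10_000, seed=0):
    rng = random.Random(seed)
    # Uniform prior over the (restricted) environment class.
    posterior = [1.0 / len(CANDIDATE_ENVS)] * len(CANDIDATE_ENVS)
    total_reward = 0

    for t in range(1, steps + 1):
        # Forced exploration with rate eps_t = 1/t: it decays, but its sum
        # diverges, so the agent keeps exploring infinitely often.
        eps_t = 1.0 / t
        if rng.random() < eps_t:
            arm = rng.randrange(2)                      # explore
        else:
            # Exploit: pick the arm with highest posterior-expected payoff.
            def expected_payoff(a):
                return sum(w * env["arm_probs"][a]
                           for w, env in zip(posterior, CANDIDATE_ENVS))
            arm = max(range(2), key=expected_payoff)

        reward = 1 if rng.random() < true_env["arm_probs"][arm] else 0
        total_reward += reward

        # Bayesian posterior update over the environment class.
        weights = [w * likelihood(env, arm, reward)
                   for w, env in zip(posterior, CANDIDATE_ENVS)]
        norm = sum(weights)
        posterior = [w / norm for w in weights]

    return total_reward / steps, posterior

if __name__ == "__main__":
    avg_reward, posterior = bayes_mixture_agent(CANDIDATE_ENVS[0])
    print(f"average reward: {avg_reward:.3f}")
    print("posterior over candidate environments:",
          [round(w, 3) for w in posterior])
```

[The point of the restriction is visible here: because the class is small and known, the posterior pins down the true environment quickly, and the exploration schedule can be tuned to the class rather than having to guard against arbitrary computable environments.]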
It is not clear that approximations of AIXI are the way to build a computable general intelligence, but AIXI (or similar agents) serve as useful benchmarks in a theoretical sense.