AIXI

AIXI is a mathematical formalism for a hypothetical (super)intelligence, developed by Marcus Hutter (2005, 2007). AIXI is not computable, and so does not serve as a design for a real-world AI, but is considered a valuable theoretical illustration with both positive and negative aspects (things AIXI would be able to do and things it arguably couldn’t do).

See also: Solomonoff induction, Decision theory, AI

The AIXI formalism says roughly to consider all possible computable models of the environment, Bayes-update them on past experiences, and use the resulting updated predictions to model the expected sensory reward of all possible strategies. This is an application of Solomonoff Induction.

AIXI can be viewed as marking the border between AI problems that would be ‘simple’ to solve given unlimited computing power and problems that are structurally ‘complicated’.

How AIXI works

Hutter (2007) describes AIXI as a combination of decision theory and algorithmic information theory: “Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff’s theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence.”

AIXI operates within the following agent model: there is an agent and an environment, where the environment is a computable function unknown to the agent. The agent therefore needs to maintain a probability distribution over the range of possible environments.

On each clock tick, the agent receives an observation (a bitstring/​number) from the environment, as well as a reward (another number).

The agent then outputs an action (another number).
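A minimal Python sketch of this percept-action cycle may make the agent model concrete. The names below (Environment, Agent, interact) are illustrative assumptions for this sketch, not part of Hutter’s formalism:

```python
# Illustrative sketch of the AIXI agent model: an agent and an unknown
# computable environment exchange percepts and actions on each clock tick.

class Environment:
    """An unknown computable function from the interaction history to percepts."""
    def percept(self, history):
        """Return (observation, reward) given the history so far."""
        raise NotImplementedError

class Agent:
    """Maps the full interaction history to the next action."""
    def act(self, history):
        raise NotImplementedError

def interact(agent, env, n_ticks):
    """Run the percept-action loop for n_ticks clock ticks and return total reward."""
    history = []            # alternating percepts and actions
    total_reward = 0.0
    for _ in range(n_ticks):
        observation, reward = env.percept(tuple(history))
        total_reward += reward
        history.append((observation, reward))
        action = agent.act(tuple(history))
        history.append(action)
    return total_reward
```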

To choose its actions, AIXI guesses at a probability distribution over its environment using Solomonoff induction, a formalization of Occam’s razor: simpler computations are a priori more likely to describe the environment than more complex ones. This distribution is then Bayes-updated by how well each model fits the evidence (more precisely, by throwing out every computation that has not exactly reproduced the environmental data so far, which for technical reasons is roughly equivalent to a Bayesian update). AIXI then calculates the expected reward of each action it might choose, weighting possible environments by their updated probabilities. It chooses the best action by extrapolating its actions recursively out to its time horizon, under the assumption that at each future step it will again choose the best possible action by the same procedure.
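In Hutter’s notation this action rule is usually written as a single expectimax expression, summing over all programs q (run on a universal Turing machine U) that reproduce the interaction history, each weighted by 2 to the minus its length ℓ(q), with m the planning horizon. The standard form of the equation is reproduced here for reference:

$$a_t \;=\; \arg\max_{a_t}\,\sum_{o_t r_t}\;\cdots\;\max_{a_m}\,\sum_{o_m r_m}\,\big[r_t + \cdots + r_m\big]\sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$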

On each iteration, the environment provides an observation and a reward as a function of the full history of the interaction; the agent likewise chooses its action as a function of the full history.
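The following toy sketch illustrates the procedure, replacing the incomputable mixture over all programs with a small hand-specified set of deterministic candidate models and a short horizon. Everything here (the function names, the representation of models and histories) is an assumption made for illustration, not AIXI itself:

```python
from typing import Callable, Dict, List, Tuple

# A "model" is one deterministic guess at the environment:
# it maps (interaction history, action) -> (observation, reward).
Percept = Tuple[int, float]
History = Tuple[Tuple[int, Percept], ...]
Model = Callable[[History, int], Percept]

def toy_aixi_action(history: History,
                    models: List[Tuple[Model, int]],   # (model, description length in bits)
                    actions: List[int],
                    horizon: int) -> int:
    """Pick the action maximizing expected reward over a finite horizon,
    under a 2^-length prior over the models that exactly reproduce the history."""

    def consistent(model: Model) -> bool:
        h: History = ()
        for action, percept in history:
            if model(h, action) != percept:
                return False              # throw out models that ever mispredicted
            h += ((action, percept),)
        return True

    # Occam prior: shorter models get exponentially more weight; renormalize after
    # discarding everything inconsistent with the data (the Bayes-update).
    survivors = [(m, 2.0 ** -length) for m, length in models if consistent(m)]
    if not survivors:
        return actions[0]                 # no model fits; fall back arbitrarily
    total = sum(w for _, w in survivors)
    beliefs = [(m, w / total) for m, w in survivors]

    def q_value(h: History, belief, action: int, depth: int) -> float:
        # Group models by the percept they predict; each group is one possible future.
        branches: Dict[Percept, list] = {}
        for m, w in belief:
            branches.setdefault(m(h, action), []).append((m, w))
        value = 0.0
        for percept, group in branches.items():
            prob = sum(w for _, w in group)
            posterior = [(m, w / prob) for m, w in group]  # update on the simulated percept
            future = 0.0
            if depth > 1:
                # Assume the agent again acts optimally at the next step (expectimax).
                future = max(q_value(h + ((action, percept),), posterior, b, depth - 1)
                             for b in actions)
            value += prob * (percept[1] + future)
        return value

    return max(actions, key=lambda a: q_value(history, beliefs, a, horizon))
```

Real AIXI replaces the hand-picked model list with all computable environment programs, weighted by 2 to the minus their length, which is exactly what makes it incomputable.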

The agent’s intelligence is defined by its expected reward across all environments, weighting their likelihood by their complexity.
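Written out, this is essentially the Legg-Hutter universal intelligence measure; assuming their notation, with E the class of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the expected total reward of policy π in μ:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}$$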

AIXI is not a feasible AI: Solomonoff induction is not computable, and AIXI only plans over a finite time horizon (though any finite horizon can be chosen), while some environments keep interacting indefinitely. A somewhat more computable variant is the time- and space-bounded AIXItl. Real AI algorithms explicitly inspired by AIXI, e.g. the Monte Carlo approximation by Veness et al. (2011), have shown interesting results on simple general-intelligence test problems.

For a short (half-page) technical introduction to AIXI, see Veness et al. (2011), pp. 1–2. For a full exposition, see Hutter (2007).

Relevance to Friendly AI

Because it abstracts optimization power away from human mental features, AIXI is valuable for thinking about the possibilities for future artificial general intelligence: it is a compact, non-anthropomorphic specification that is technically complete and closed, in the sense that any given feature of AIXI either follows from the equations or it does not. In particular, it acts as a constructive demonstration of an AGI that does not have human-like terminal values and acts solely to maximize its reward function (Yampolskiy & Fox 2012).

AIXI also has limitations as a model of future AGI, for example the anvil problem: AIXI lacks a self-model. It extrapolates its own actions into the future indefinitely, on the assumption that it will keep working in the same way. Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged and an implementation that could change its behavior due to bugs; the AIXI formalism ignores these possibilities entirely.

Blog posts

An Intuitive Explanation of Solomonoff Induction (Alex_Altair, 11 Jul 2012; 159 points, 225 comments)
Failures of an embodied AIXI (So8res, 15 Jun 2014; 50 points, 46 comments)
Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect (RogerDearnaley, 26 Jan 2024; 16 points, 2 comments)
Intuitive Explanation of AIXI (Thomas Larsen, 12 Jun 2022; 21 points, 0 comments)
Program Search and Incomplete Understanding (Diffractor, 29 Apr 2018; 21 points, 1 comment)
New intro textbook on AIXI (Alex_Altair, 11 May 2024; 44 points, 8 comments)
Would AIXI protect itself? (Stuart_Armstrong, 9 Dec 2011; 15 points, 23 comments)
[video] Paul Christiano’s impromptu tutorial on AIXI and TDT (lukeprog, 19 Mar 2012; 12 points, 13 comments)
How to make AIXI-tl incapable of learning (itaibn0, 27 Jan 2014; 7 points, 5 comments)
Help request: What is the Kolmogorov complexity of computable approximations to AIXI? (AnnaSalamon, 5 Dec 2010; 9 points, 9 comments)
“AIXIjs: A Software Demo for General Reinforcement Learning”, Aslanides 2017 (gwern, 29 May 2017; 7 points, 1 comment; arxiv.org)
Can AIXI be trained to do anything a human can? (Stuart_Armstrong, 20 Oct 2014; 5 points, 9 comments)
LW is to rationality as AIXI is to intelligence (XiXiDu, 6 Mar 2011; 3 points, 46 comments)
Mathematics for AIXI and Gödel machine (Faustus2, 22 Jul 2015; 1 point, 6 comments)
Corrigibility for AIXI via double indifference (Stuart_Armstrong, 4 May 2016; 0 points, 0 comments)
The “best predictor is malicious optimiser” problem (Donald Hobson, 29 Jul 2020; 14 points, 10 comments)
Why you can’t treat decidability and complexity as a constant (Post #1) (Noosphere89, 26 Jul 2023; 6 points, 13 comments)
Occam’s Razor and the Universal Prior (Peter Chatain, 3 Oct 2021; 28 points, 5 comments)
Potential Alignment mental tool: Keeping track of the types (Donald Hobson, 22 Nov 2021; 29 points, 1 comment)
mAIry’s room: AI reasoning to solve philosophical problems (Stuart_Armstrong, 5 Mar 2019; 87 points, 41 comments, 2 reviews)
The Problem with AIXI (Rob Bensinger, 18 Mar 2014; 44 points, 79 comments)
Versions of AIXI can be arbitrarily stupid (Stuart_Armstrong, 10 Aug 2015; 30 points, 59 comments)
Reflective AIXI and Anthropics (Diffractor, 24 Sep 2018; 18 points, 14 comments)
AIXI and Existential Despair (paulfchristiano, 8 Dec 2011; 23 points, 38 comments)
A utility-maximizing varient of AIXI (AlexMennen, 17 Dec 2012; 26 points, 20 comments)
Failures of UDT-AIXI, Part 1: Improper Randomizing (Diffractor, 6 Jan 2019; 14 points, 3 comments)
Save the princess: A tale of AIXI and utility functions (Anja, 1 Feb 2013; 24 points, 11 comments)
AIXI-style IQ tests (gwern, 29 Jan 2011; 14 points, 7 comments)
Open Problems in AIXI Agent Foundations (Cole Wyeth, 12 Sep 2024; 35 points, 2 comments)
Hutter-Prize for Prompts (rokosbasilisk, 24 Mar 2023; 5 points, 10 comments)
Proposal: Using Monte Carlo tree search instead of RLHF for alignment research (Christopher King, 20 Apr 2023; 2 points, 7 comments)
Interview with Vanessa Kosoy on the Value of Theoretical Research for AI (WillPetillo, 4 Dec 2023; 37 points, 0 comments)
Free Will and Dodging Anvils: AIXI Off-Policy (Cole Wyeth, 29 Aug 2024; 37 points, 12 comments)
Universal agents and utility functions (Anja, 14 Nov 2012; 43 points, 38 comments)
Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence (Akira Pyinya, 30 Dec 2022; 10 points, 4 comments)
Solomonoff Cartesianism (Rob Bensinger, 2 Mar 2014; 51 points, 51 comments)
The Ethics of ACI (Akira Pyinya, 16 Feb 2023; −8 points, 0 comments)
Summary of the Acausal Attack Issue for AIXI (Diffractor, 13 Dec 2021; 12 points, 6 comments)
Meta Programming GPT: A route to Superintelligence? (dmtea, 11 Jul 2020; 10 points, 7 comments)