
Solomonoff Induction


Solomonoff induction is an inference system defined by Ray Solomonoff that will learn to correctly predict any computable sequence with only the absolute minimum amount of data. This system, in a certain sense, is the perfect universal prediction algorithm.

To summarize it very informally, Solomonoff induction works by:

- considering every computable hypothesis, represented as a program that could generate the observed sequence;
- weighting each hypothesis by its simplicity, assigning a prior of $2^{-n}$ to a program $n$ bits long;
- discarding hypotheses that become inconsistent with the data as it arrives; and
- predicting the next observation from the surviving hypotheses in proportion to their weights (a minimal sketch follows this list).

Because it weights hypotheses by simplicity, the system automatically incorporates a form of Occam’s razor, which is why it has been playfully referred to as Solomonoff’s lightsaber.
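For concreteness, here is a minimal Python sketch of that weight-discard-predict loop over a toy hypothesis space. Real Solomonoff induction is uncomputable, so the “programs” here are hypothetical stand-ins: finite bit patterns, each encoding the hypothesis “the observed sequence repeats this pattern forever”, weighted by $2^{-\text{length}}$.

```python
from itertools import product

# Toy stand-in for Solomonoff induction (the real thing is uncomputable).
# A "program" is a finite bit pattern; the hypothesis it encodes is
# "the observed sequence repeats this pattern forever".
def hypotheses(max_len):
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** (-n)  # simplicity prior: 2^-length

def predict_next(observed, max_len=10):
    """Posterior-weighted probability that the next bit is '1'."""
    total = weight_one = 0.0
    for pattern, prior in hypotheses(max_len):
        extended = "".join(pattern[i % len(pattern)]
                           for i in range(len(observed) + 1))
        if not extended.startswith(observed):
            continue  # discard hypotheses the data has refuted
        total += prior
        if extended[-1] == "1":
            weight_one += prior
    return weight_one / total if total else 0.5

# The simplest surviving hypothesis ("01" repeating) dominates the
# posterior, so the predicted probability of '1' is low: '0' comes next.
print(predict_next("010101"))
```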

Solomonoff induction gets off the ground with a solution to the “problem of the priors”. Suppose that you stand before a universal prefix Turing machine $U$. You are interested in a certain finite output string $y_0$. In particular, you want to know the probability that $U$ will produce the output $y_0$ given a random input tape. This probability is the Solomonoff a priori probability of $y_0$.

More precisely, suppose that a particular infinite input string $x_0$ is about to be fed into $U$. However, you know nothing about $x_0$ other than that each term of the string is either 0 or 1. As far as your state of knowledge is concerned, the $i$-th digit of $x_0$ is as likely to be 0 as it is to be 1, for all $i = 1, 2, \ldots$. You want to find the a priori probability $m(y_0)$ of the following proposition:

(*) If $U$ takes in $x_0$ as input, then $U$ will produce output $y_0$ and then halt.

Unfortunately, computing the exact value of $m(y_0)$ would require solving the halting problem, which is undecidable. Nonetheless, it is easy to derive an expression for $m(y_0)$. If $U$ halts on an infinite input string $x$, then $U$ must read only a finite initial segment of $x$, after which $U$ immediately halts. We call a finite string $p$ a self-delimiting program if and only if there exists an infinite input string $x$ beginning with $p$ such that $U$ halts on $x$ immediately after reading to the end of $p$. The set $\mathcal{P}$ of self-delimiting programs is the prefix code for $U$. It is the determination of the elements of $\mathcal{P}$ that requires a solution to the halting problem.
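The prefix-free property is what lets the weights $2^{-\ell(p)}$ behave like probabilities: by Kraft’s inequality, a prefix-free code satisfies $\sum_p 2^{-\ell(p)} \le 1$. A small illustrative check in Python (the example code set is hypothetical, not derived from any particular $U$):

```python
def is_prefix_free(codes):
    """True iff no code is a proper prefix of another.
    After lexicographic sorting, any prefix pair must be adjacent."""
    codes = sorted(codes)
    return not any(b.startswith(a) for a, b in zip(codes, codes[1:]))

P = ["0", "10", "110", "111"]           # hypothetical prefix code
assert is_prefix_free(P)
assert not is_prefix_free(["0", "01"])  # "0" is a prefix of "01"

# Kraft's inequality: the weights 2^-len(p) sum to at most 1,
# so the Solomonoff weights form a (sub)probability measure.
assert sum(2.0 ** -len(p) for p in P) <= 1.0
```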

Given $p \in \mathcal{P}$, we write “$\mathrm{prog}(x_0) = p$” to express the proposition that $x_0$ begins with $p$, and we write “$U(p) = y_0$” to express the proposition that $U$ produces output $y_0$, and then halts, when fed any input beginning with $p$. Proposition (*) is then equivalent to the exclusive disjunction


$$\bigvee_{p \,\in\, \mathcal{P} \,:\, U(p) = y_0} \big(\mathrm{prog}(x_0) = p\big).$$
Since $x_0$ was chosen at random from $\{0,1\}^{\omega}$, we take the probability of $\mathrm{prog}(x_0) = p$ to be $2^{-\ell(p)}$, where $\ell(p)$ is the length of $p$ as a bit string. Hence, the probability of (*) is


$$m(y_0) := \sum_{p \,\in\, \mathcal{P} \,:\, U(p) = y_0} 2^{-\ell(p)}.$$
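Although $m(y_0)$ itself is uncomputable, it can be approximated from below by enumerating candidate programs up to a length bound and summing the weights of those that output $y_0$. The sketch below substitutes a hypothetical toy prefix machine for a universal $U$ so that the example actually runs; for a genuine universal machine one would also need a step budget, and the sum would only converge to $m(y_0)$ from below as the bounds grow.

```python
from itertools import product

def run_toy(p):
    """Hypothetical toy prefix machine standing in for a universal U:
    it reads 1s until the first 0, then halts and outputs that many 1s.
    Its self-delimiting programs are exactly "0", "10", "110", ...,
    so halting is decidable here (unlike for a real universal U)."""
    if "0" not in p:
        return None              # machine would read past the end of p
    k = p.index("0")
    if k != len(p) - 1:
        return None              # machine halts before reading all of p
    return "1" * k

def m_approx(y0, max_len):
    """Sum 2^-len(p) over programs p of length <= max_len with U(p) = y0.
    Exact for the toy machine once max_len >= len(y0) + 1; for a
    universal U this would be a lower bound converging to m(y0)."""
    return sum(
        2.0 ** (-n)
        for n in range(1, max_len + 1)
        for bits in product("01", repeat=n)
        if run_toy("".join(bits)) == y0
    )

print(m_approx("111", max_len=8))  # 0.0625 = 2^-4, from the single program "1110"
```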


References

The Solomonoff Prior is Malign
Mark Xu · 14 Oct 2020 1:33 UTC
171 points
52 comments · 16 min read · LW link · 3 reviews

An Intuitive Explanation of Solomonoff Induction
Alex_Altair · 11 Jul 2012 8:05 UTC
159 points
225 comments · 24 min read · LW link

A Semitechnical Introductory Dialogue on Solomonoff Induction
Eliezer Yudkowsky · 4 Mar 2021 17:27 UTC
142 points
33 comments · 54 min read · LW link

Open Problems Related to Solomonoff Induction
Wei Dai · 6 Jun 2012 0:26 UTC
44 points
104 comments · 2 min read · LW link

Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn’t require knowing Occam’s razor
Christopher King · 18 Jun 2023 1:52 UTC
38 points
28 comments · 4 min read · LW link

When does rationality-as-search have nontrivial implications?
nostalgebraist · 4 Nov 2018 22:42 UTC
72 points
12 comments · 3 min read · LW link

[Question] How is Solomonoff induction calculated in practice?
Bucky · 4 Jun 2019 10:11 UTC
33 points
13 comments · 1 min read · LW link

The Problem of the Criterion
Gordon Seidoh Worley · 21 Jan 2021 15:05 UTC
61 points
63 comments · 10 min read · LW link

K-complexity is silly; use cross-entropy instead
So8res · 20 Dec 2022 23:06 UTC
145 points
54 comments · 4 min read · LW link · 2 reviews

Solomonoff Cartesianism
Rob Bensinger · 2 Mar 2014 17:56 UTC
51 points
51 comments · 25 min read · LW link

[Question] Is the human brain a valid choice for the Universal Turing Machine in Solomonoff Induction?
habryka · 8 Dec 2018 1:49 UTC
22 points
13 comments · 1 min read · LW link

Clarifying Consequentialists in the Solomonoff Prior
Vlad Mikulik · 11 Jul 2018 2:35 UTC
20 points
16 comments · 6 min read · LW link

A potential problem with using Solomonoff induction as a prior
JoshuaZ · 7 Apr 2011 19:27 UTC
18 points
18 comments · 1 min read · LW link

Clarifying The Malignity of the Universal Prior: The Lexical Update
interstice · 15 Jan 2020 0:00 UTC
20 points
2 comments · 3 min read · LW link

[Question] Questions about Solomonoff induction
mukashi · 10 Jan 2024 1:16 UTC
7 points
11 comments · 1 min read · LW link

Reflective AIXI and Anthropics
Diffractor · 24 Sep 2018 2:15 UTC
18 points
14 comments · 8 min read · LW link

Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach
Scott Garrabrant · 22 Jul 2015 19:27 UTC
13 points
0 comments · 1 min read · LW link

My impression of singular learning theory
Ege Erdil · 18 Jun 2023 15:34 UTC
47 points
30 comments · 2 min read · LW link

Mathematical Inconsistency in Solomonoff Induction?
curi · 25 Aug 2020 17:09 UTC
7 points
15 comments · 2 min read · LW link

Why you can’t treat decidability and complexity as a constant (Post #1)
Noosphere89 · 26 Jul 2023 17:54 UTC
6 points
13 comments · 5 min read · LW link

Limited agents need approximate induction
Manfred · 24 Apr 2015 7:42 UTC
16 points
10 comments · 8 min read · LW link

Solomonoff Induction and Sleeping Beauty
ike · 17 Nov 2020 2:28 UTC
7 points
0 comments · 2 min read · LW link

Solomonoff Induction explained via dialog.
panickedapricott · 21 Sep 2017 5:27 UTC
3 points
0 comments · 1 min read · LW link
(arbital.com)

Occam’s Razor and the Universal Prior
Peter Chatain · 3 Oct 2021 3:23 UTC
28 points
5 comments · 21 min read · LW link

What is the advantage of the Kolmogorov complexity prior?
skepsci · 16 Feb 2012 1:51 UTC
18 points
29 comments · 2 min read · LW link

Are the fundamental physical constants computable?
Yair Halberstadt · 5 Apr 2022 15:05 UTC
15 points
6 comments · 2 min read · LW link

From the “weird math questions” department...
CronoDAS · 9 Aug 2012 7:19 UTC
7 points
50 comments · 1 min read · LW link

Pascal’s Mugging: Tiny Probabilities of Vast Utilities
Eliezer Yudkowsky · 19 Oct 2007 23:37 UTC
112 points
353 comments · 4 min read · LW link

Multiple Worlds, One Universal Wave Function
evhub · 4 Nov 2020 22:28 UTC
60 points
76 comments · 61 min read · LW link

Computational Model: Causal Diagrams with Symmetry
johnswentworth · 22 Aug 2019 17:54 UTC
53 points
29 comments · 4 min read · LW link

An additional problem with Solomonoff induction
gedymin · 22 Jan 2014 23:34 UTC
3 points
51 comments · 4 min read · LW link

Remarks 1–18 on GPT (compressed)
Cleo Nardo · 20 Mar 2023 22:27 UTC
146 points
35 comments · 31 min read · LW link

[Question] Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?
curi · 4 Sep 2020 19:46 UTC
2 points
15 comments · 1 min read · LW link

Approximating Solomonoff Induction
Houshalter · 29 May 2015 12:23 UTC
13 points
45 comments · 3 min read · LW link

The power of finite and the weakness of infinite binary point numbers
AxiomWriter · 20 Apr 2024 6:03 UTC
−3 points
6 comments · 2 min read · LW link

Prediction can be Outer Aligned at Optimum
Lukas Finnveden · 10 Jan 2021 18:48 UTC
15 points
12 comments · 11 min read · LW link

Excerpt from Arbital Solomonoff induction dialogue
Richard_Ngo · 17 Jan 2021 3:49 UTC
36 points
6 comments · 5 min read · LW link
(arbital.com)

What program structures enable efficient induction?
Daniel C · 5 Sep 2024 10:12 UTC
21 points
5 comments · 3 min read · LW link

Response to “What does the universal prior actually look like?”
michaelcohen · 20 May 2021 16:12 UTC
36 points
33 comments · 18 min read · LW link

An attempt to break circularity in science
fryolysis · 15 Jul 2022 18:32 UTC
3 points
5 comments · 1 min read · LW link

Does Solomonoff always win?
cousin_it · 23 Feb 2011 20:42 UTC
14 points
56 comments · 2 min read · LW link

Summary of the Acausal Attack Issue for AIXI
Diffractor · 13 Dec 2021 8:16 UTC
12 points
6 comments · 4 min read · LW link

[Question] Generalization of the Solomonoff Induction to Accuracy—Is it possible? Would it be useful?
PeterL · 20 Feb 2022 19:29 UTC
2 points
1 comment · 1 min read · LW link

Commensurable Scientific Paradigms; or, computable induction
samshap · 13 Apr 2022 0:01 UTC
14 points
0 comments · 5 min read · LW link

The Solomonoff prior is malign. It’s not a big deal.
Charlie Steiner · 25 Aug 2022 8:25 UTC
41 points
9 comments · 7 min read · LW link

A Brief Introduction to Algorithmic Common Intelligence, ACI . 1
Akira Pyinya · 5 Apr 2023 5:43 UTC
−2 points
1 comment · 2 min read · LW link

A Brief Introduction to ACI, 2: An Event-Centric View
Akira Pyinya · 12 Apr 2023 3:23 UTC
3 points
0 comments · 2 min read · LW link

Prosaic misalignment from the Solomonoff Predictor
Cleo Nardo · 9 Dec 2022 17:53 UTC
42 points
3 comments · 5 min read · LW link

Solomonoff Induction, by Shane Legg
cousin_it · 21 Feb 2011 0:32 UTC
21 points
8 comments · 1 min read · LW link

all claw, no world — and other thoughts on the universal distribution
Tamsin Leake · 14 Dec 2022 18:55 UTC
15 points
0 comments · 7 min read · LW link
(carado.moe)

(A Failed Approach) From Precedent to Utility Function
Akira Pyinya · 29 Apr 2023 21:55 UTC
0 points
2 comments · 4 min read · LW link

Intuitive Explanation of Solomonoff Induction
lukeprog · 1 Dec 2011 6:56 UTC
14 points
31 comments · 10 min read · LW link

Beyond Rewards and Values: A Non-dualistic Approach to Universal Intelligence
Akira Pyinya · 30 Dec 2022 19:05 UTC
10 points
4 comments · 14 min read · LW link

Solomonoff’s solipsism
Mergimio H. Doefevmil · 8 May 2023 6:55 UTC
−13 points
9 comments · 1 min read · LW link

Belief in the Implied Invisible
Eliezer Yudkowsky · 8 Apr 2008 7:40 UTC
65 points
34 comments · 6 min read · LW link

ACI #3: The Origin of Goals and Utility
Akira Pyinya · 17 May 2023 20:47 UTC
1 point
0 comments · 6 min read · LW link

Weak arguments against the universal prior being malign
X4vier · 14 Jun 2018 17:11 UTC
50 points
23 comments · 3 min read · LW link

Decoherence is Simple
Eliezer Yudkowsky · 6 May 2008 7:44 UTC
72 points
62 comments · 11 min read · LW link

This Territory Does Not Exist
ike · 13 Aug 2020 0:30 UTC
7 points
197 comments · 7 min read · LW link

The Ethics of ACI
Akira Pyinya · 16 Feb 2023 23:51 UTC
−8 points
0 comments · 3 min read · LW link

“The Solomonoff Prior is Malign” is a special case of a simpler argument
David Matolcsi · 17 Nov 2024 21:32 UTC
75 points
10 comments · 12 min read · LW link

How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.
Christopher King · 11 Jul 2023 19:27 UTC
17 points
11 comments · 2 min read · LW link

The prior of a hypothesis does not depend on its complexity
cousin_it · 26 Aug 2010 13:20 UTC
34 points
69 comments · 1 min read · LW link

Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning
Roger Dearnaley · 21 Feb 2023 9:05 UTC
10 points
1 comment · 23 min read · LW link