
Occam’s Razor

Last edit: 4 Jan 2022 15:35 UTC by Gordon Seidoh Worley

Occam’s razor (more formally, the principle of parsimony) is a principle commonly stated as “Entities must not be multiplied beyond necessity”. When several theories explain the same observations, Occam’s razor favors the simpler one. Note that Occam’s razor is a requirement on the simplicity of theories, not on the size of the systems those theories describe; the immensity of the Universe, for example, is not at odds with it.

Occam’s razor is necessitated by the conjunction rule of probability theory: the conjunction “A and B” can never be more probable than A alone, and is strictly less probable unless A guarantees B; every detail you tack onto your story drives the probability down.
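
The inequality behind this argument can be stated in one line (the formula below is standard probability theory, added here only as a reference, not part of the original tag text):

```latex
% Conjunction rule: adding a detail B to a hypothesis A can only lower its probability.
\[
  P(A \wedge B) \;=\; P(A)\,P(B \mid A) \;\le\; P(A),
  \qquad \text{with equality only when } P(B \mid A) = 1 .
\]
```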

Occam’s razor has been formalized as Minimum Description Length or Minimum Message Length, in which the total size of the theory is the length of the message required to describe the theory, plus the length of the message required to describe the evidence using the theory. Solomonoff induction is the ultimate case of minimum message length in which the code for messages can describe all computable hypotheses. This has jokingly been referred to as “Solomonoff’s lightsaber”.
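
As a toy sketch of how a two-part message-length comparison works in practice, consider choosing between a “fair coin” theory and a “biased coin” theory for a short run of flips. The data, the half-log-n cost charged for the bias parameter, and the function names are illustrative assumptions, not drawn from the posts below:

```python
import math

def total_message_length(theory_bits: float, data_bits: float) -> float:
    """Two-part code: bits to state the theory plus bits to encode the data given it."""
    return theory_bits + data_bits

def data_bits(p: float, n: int, k: int) -> float:
    """Bits needed to encode k heads in n flips under a Bernoulli(p) model."""
    return -(k * math.log2(p) + (n - k) * math.log2(1 - p))

# Toy data: 20 flips, 18 heads.
n, heads = 20, 18

# "Fair coin" theory: nothing to specify beyond the model class (theory cost ~0 bits).
fair_total = total_message_length(0.0, data_bits(0.5, n, heads))

# "Biased coin" theory: also pays ~0.5 * log2(n) bits to transmit its fitted bias.
biased_total = total_message_length(0.5 * math.log2(n), data_bits(heads / n, n, heads))

print(f"fair coin   : {fair_total:.1f} bits")
print(f"biased coin : {biased_total:.1f} bits")
# Whichever theory yields the shorter total message is preferred.
```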

Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn’t require knowing Occam’s razor
Christopher King, 18 Jun 2023 1:52 UTC, 38 points, 28 comments, 4 min read, LW link

Message Length
Zack_M_Davis, 20 Oct 2020 5:52 UTC, 134 points, 25 comments, 12 min read, LW link

Belief in the Implied Invisible
Eliezer Yudkowsky, 8 Apr 2008 7:40 UTC, 65 points, 34 comments, 6 min read, LW link

Msg Len
Zack_M_Davis, 12 Oct 2020 3:35 UTC, 65 points, 4 comments, 1 min read, LW link

Very Short Introduction to Bayesian Model Comparison
johnswentworth, 16 Jul 2019 19:48 UTC, 32 points, 5 comments, 1 min read, LW link

[Question] What is an “anti-Occamian prior”?
Zane, 23 Oct 2023 2:26 UTC, 35 points, 22 comments, 1 min read, LW link

[Question] Instrumental Occam?
abramdemski, 31 Jan 2020 19:27 UTC, 30 points, 15 comments, 1 min read, LW link

Taking Occam Seriously
steven0461, 29 May 2009 17:31 UTC, 32 points, 51 comments, 1 min read, LW link

A Semitechnical Introductory Dialogue on Solomonoff Induction
Eliezer Yudkowsky, 4 Mar 2021 17:27 UTC, 142 points, 33 comments, 54 min read, LW link

Occam’s Razor and the Universal Prior
Peter Chatain, 3 Oct 2021 3:23 UTC, 28 points, 5 comments, 21 min read, LW link

Mathematics as a lossy compression algorithm gone wild
Shmi, 6 Jun 2014 23:53 UTC, 53 points, 82 comments, 5 min read, LW link

Bias towards simple functions; application to alignment?
DavidHolmes, 18 Aug 2022 16:15 UTC, 3 points, 7 comments, 2 min read, LW link

Induction; or, the rules and etiquette of reference class tennis
paulfchristiano, 3 Mar 2013 23:27 UTC, 11 points, 8 comments, 9 min read, LW link

Occam’s Razor
Eliezer Yudkowsky, 26 Sep 2007 6:36 UTC, 131 points, 55 comments, 5 min read, LW link

Notes on Simplicity
David Gross, 2 Dec 2020 23:14 UTC, 9 points, 0 comments, 7 min read, LW link

Dissolving the Problem of Induction
Liron, 27 Dec 2020 17:58 UTC, 40 points, 32 comments, 7 min read, LW link

A Proof of Occam’s Razor
Unknowns, 10 Aug 2010 14:20 UTC, 0 points, 139 comments, 3 min read, LW link

[Question] Why would code/English or low-abstraction/high-abstraction simplicity or brevity correspond?
curi, 4 Sep 2020 19:46 UTC, 2 points, 15 comments, 1 min read, LW link

Psychic Powers
Eliezer Yudkowsky, 12 Sep 2008 19:28 UTC, 47 points, 89 comments, 3 min read, LW link

Probability theory implies Occam’s razor
Maxwell Peterson, 18 Dec 2020 7:48 UTC, 8 points, 4 comments, 6 min read, LW link

State, Art, Identity
musq, 25 Jan 2021 20:22 UTC, 1 point, 0 comments, 2 min read, LW link

[Question] What qualities does an AGI need to have to realize the risk of false vacuum, without hardcoding physics theories into it?
RationalSieve, 3 Feb 2023 16:00 UTC, 1 point, 4 comments, 1 min read, LW link

What kind of place is this?
Jim Pivarski, 25 Feb 2023 2:14 UTC, 24 points, 24 comments, 8 min read, LW link

Explanations as Hard to Vary Assertions
Alexander, 24 Sep 2021 11:33 UTC, 17 points, 24 comments, 8 min read, LW link

Not using a priori information for Russian propaganda
EniScien, 24 Apr 2023 1:14 UTC, −3 points, 4 comments, 1 min read, LW link

Blunt Razor
fryolysis, 24 Oct 2023 17:27 UTC, 3 points, 0 comments, 2 min read, LW link

The Strong Occam’s Razor
cousin_it, 11 Nov 2010 17:28 UTC, 17 points, 74 comments, 3 min read, LW link

Occam’s Razor May Be Sufficient to Infer the Preferences of Irrational Agents: A reply to Armstrong & Mindermann
Daniel Kokotajlo, 7 Oct 2019 19:52 UTC, 47 points, 39 comments, 7 min read, LW link

Against Occam’s Razor
zulupineapple, 5 Apr 2018 17:59 UTC, 4 points, 20 comments, 1 min read, LW link

If Many-Worlds Had Come First
Eliezer Yudkowsky, 10 May 2008 7:43 UTC, 89 points, 189 comments, 9 min read, LW link

Where Recursive Justification Hits Bottom
Eliezer Yudkowsky, 8 Jul 2008 10:16 UTC, 123 points, 81 comments, 10 min read, LW link

A Priori
Eliezer Yudkowsky, 8 Oct 2007 21:02 UTC, 86 points, 133 comments, 4 min read, LW link

Decoherence is Simple
Eliezer Yudkowsky, 6 May 2008 7:44 UTC, 72 points, 62 comments, 11 min read, LW link

Kevin T. Kelly’s Ockham Efficiency Theorem
Johnicholas, 16 Aug 2010 4:46 UTC, 43 points, 82 comments, 5 min read, LW link

The prior of a hypothesis does not depend on its complexity
cousin_it, 26 Aug 2010 13:20 UTC, 34 points, 69 comments, 1 min read, LW link

How do low level hypotheses constrain high level ones? The mystery of the disappearing diamond.
Christopher King, 11 Jul 2023 19:27 UTC, 17 points, 11 comments, 2 min read, LW link