
Optimization

Last edit: 30 Sep 2020 19:18 UTC by Ruby

An optimization process is any process that systematically produces solutions better than the ones that came before. More technically, such a process steers the world into a specific and otherwise unlikely set of states: it searches a large space of possibilities and hits small, low-probability targets. When an agent gradually guides the process toward some particular state by aiming at specific targets, we can say the process prefers that state.

A simple example makes this concrete: Eliezer Yudkowsky points to natural selection as an optimization process. Driven by an implicit preference for better replicators, natural selection searches the vast space of possible genomes and hits small targets: efficient mutations.
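To make "hitting small targets in a large search space" concrete, here is a toy simulation (an illustration of the idea, not anything from the posts referenced below): the "genome" is a string of 50 bits, fitness counts ones as a stand-in for replicator quality, and the all-ones genome, which random sampling finds with probability 2^-50, is reached by repeated mutation and selection.

```python
import random

random.seed(0)
GENOME_LEN = 50   # a random bit string is all-ones with probability 2**-50

def fitness(genome):
    """Stand-in for 'better replicator': count the ones in the genome."""
    return sum(genome)

def mutate(genome, rate=0.02):
    """Flip each bit independently with the given probability."""
    return [bit ^ (random.random() < rate) for bit in genome]

# Selection: the fitter of parent and mutated offspring survives.
genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
for _ in range(20_000):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):
        genome = child

print(fitness(genome))  # climbs to (or very near) the 2**-50 target
```

The loop is a bare-bones (1+1) evolutionary algorithm; the point is only that cumulative selection reaches a target that blind chance essentially never would.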

Consider the human being: a highly complex object, extremely unlikely to have been assembled by chance. Natural selection, however, working over millions of years, built up the infrastructure needed to produce such a functioning body. This body, like those of other organisms, got the chance to develop (was selected) because it is itself a rather efficient replicator, well suited to the environment in which it arose.

Or consider the famous chess-playing computer, Deep Blue. Outside the narrow domain of selecting moves for chess games it can't do anything impressive, but as a chess player it was massively more effective than virtually all humans. It has high optimization power in the chess domain and almost none in any other field. Humans and evolution, by contrast, are more domain-general optimization processes than Deep Blue, though that doesn't make them more effective at chess specifically. (Note, too, where the optimization-process abstraction is useful and where it breaks down: it's not obvious what it would mean for "evolution" to play chess, and yet it is useful to talk about the optimization power of natural selection, or of Deep Blue.)

Measuring Optimization Power

One way to think mathematically about optimization, as with evidence, is in information-theoretic bits. The optimization power of a result is the surprise we would feel at it if no optimization process were present: the base-two logarithm of the reciprocal of the result's probability. A one-in-a-million solution (one so good, relative to your preference ordering, that it would take about a million random tries to find something at least as good) carries log_2(1,000,000) ≈ 19.9 bits of optimization. Compared to a random configuration of matter, any artifact you see is going to be much more optimized than this. The math describes only laws and general principles for reasoning about optimization; as with probability theory, you often can't apply it directly.
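As a sketch of how such a measurement could be run in practice (the function name and the dice example are illustrative assumptions, not a standard implementation): draw many unoptimized samples, count how many score at least as well as the optimized result, and take the negative base-two log of that fraction.

```python
import math
import random

def optimization_power_bits(result_score, random_scores):
    """Estimated bits of optimization: -log2 of the fraction of random
    tries that score at least as well as the optimized result."""
    hits = sum(1 for s in random_scores if s >= result_score)
    # Add-one correction so a zero count gives a finite (lower-bound) estimate.
    p = (hits + 1) / (len(random_scores) + 1)
    return -math.log2(p)

# The exact case from the text: a one-in-a-million result.
print(round(-math.log2(1 / 1_000_000), 1))  # 19.9

# Monte Carlo example: the outcome is the sum of 20 dice; an "optimizer"
# that achieves all sixes (sum 120) hits a tiny region of outcome space.
random.seed(0)
rolls = [sum(random.randint(1, 6) for _ in range(20)) for _ in range(100_000)]
print(optimization_power_bits(120, rolls))  # ~16.6, capped by sample size
```

Note that a finite sample caps the measurable optimization power at about log2 of the sample size, which is one reason the math can't always be applied directly.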

Further Reading & References

See also

The ground of optimization (Alex Flint, 20 Jun 2020)
Measuring Optimization Power (Eliezer Yudkowsky, 27 Oct 2008)
Optimization (Eliezer Yudkowsky, 13 Sep 2008)
Optimization Amplifies (Scott Garrabrant, 27 Jun 2018)
Selection vs Control (abramdemski, 2 Jun 2019)
DL towards the unaligned Recursive Self-Optimization attractor (jacob_cannell, 18 Dec 2021)
Aiming at the Target (Eliezer Yudkowsky, 26 Oct 2008)
Risks from Learned Optimization: Introduction (31 May 2019)
Thoughts and problems with Eliezer’s measure of optimization power (Stuart_Armstrong, 8 Jun 2012)
Beren’s “Deconfusing Direct vs Amortised Optimisation” (DragonGod, 7 Apr 2023)
The Optimizer’s Curse and How to Beat It (lukeprog, 16 Sep 2011)
Bottle Caps Aren’t Optimisers (DanielFilan, 31 Aug 2018)
Optimality is the tiger, and agents are its teeth (Veedrac, 2 Apr 2022)
Steering systems (Max H, 4 Apr 2023)
Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom) (RogerDearnaley, 25 May 2023)

Goodhart’s Curse and Limitations on AI Alignment (Gordon Seidoh Worley, 19 Aug 2019)
Towards Measures of Optimisation (12 May 2023)
Optimization Concepts in the Game of Life (16 Oct 2021)
Meaning & Agency (abramdemski, 19 Dec 2023)
Gaia Network: a practical, incremental pathway to Open Agency Architecture (20 Dec 2023)
Searching for Searching for Search (Rubi J. Hudson, 14 Feb 2024)
Difficulty classes for alignment properties (Jozdien, 20 Feb 2024)
Towards a formalization of the agent structure problem (Alex_Altair, 29 Apr 2024)
Fat Tails Discourage Compromise (niplav, 17 Jun 2024)
Clarifying mesa-optimization (21 Mar 2023)
Draft: Introduction to optimization (Alex_Altair, 26 Mar 2023)
Draft: The optimization toolbox (Alex_Altair, 28 Mar 2023)
Draft: Detecting optimization (Alex_Altair, 29 Mar 2023)
Draft: Inferring minimizers (Alex_Altair, 1 Apr 2023)
Consequentialism is in the Stars not Ourselves (DragonGod, 24 Apr 2023)

[Question] How Many Bits Of Optimization Can One Bit Of Observation Unlock? (johnswentworth, 26 Apr 2023)
Game Theory without Argmax [Part 1] (Cleo Nardo, 11 Nov 2023)
Game Theory without Argmax [Part 2] (Cleo Nardo, 11 Nov 2023)
What is optimization power, formally? (sbenthall, 18 Oct 2014)
Mathematical Measures of Optimization Power (Alex_Altair, 24 Nov 2012)
Optimization Provenance (Adele Lopez, 23 Aug 2019)
Two senses of “optimizer” (Joar Skalse, 21 Aug 2019)
Is the term mesa optimizer too narrow? (Matthew Barnett, 14 Dec 2019)
Mesa-Optimizers vs “Steered Optimizers” (Steven Byrnes, 10 Jul 2020)
Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts—Part 4) (Davidmanheim, 12 Aug 2019)
The Credit Assignment Problem (abramdemski, 8 Nov 2019)
Fake Optimization Criteria (Eliezer Yudkowsky, 10 Nov 2007)
Search versus design (Alex Flint, 16 Aug 2020)
The First World Takeover (Eliezer Yudkowsky, 19 Nov 2008)
Life’s Story Continues (Eliezer Yudkowsky, 21 Nov 2008)
Utility Maximization = Description Length Minimization (johnswentworth, 18 Feb 2021)

Applications for Deconfusing Goal-Directedness (adamShimi, 8 Aug 2021)
A new definition of “optimizer” (Chantiel, 9 Aug 2021)
Measurement, Optimization, and Take-off Speed (jsteinhardt, 10 Sep 2021)
In Defence of Optimizing Routine Tasks (leogao, 9 Nov 2021)
Ngo and Yudkowsky on AI capability gains (18 Nov 2021)
Defining “optimizer” (Chantiel, 17 Apr 2021)
Bits of Optimization Can Only Be Lost Over A Distance (johnswentworth, 23 May 2022)
Distributed Decisions (johnswentworth, 29 May 2022)
Quantifying General Intelligence (JasonBrown, 17 Jun 2022)
Defining Optimization in a Deeper Way Part 1 (J Bostock, 1 Jul 2022)
Notes on Simplicity (David Gross, 2 Dec 2020)
Defining Optimization in a Deeper Way Part 2 (J Bostock, 11 Jul 2022)
Defining Optimization in a Deeper Way Part 3 (J Bostock, 20 Jul 2022)
Defining Optimization in a Deeper Way Part 4 (J Bostock, 28 Jul 2022)
Abstracting The Hardness of Alignment: Unbounded Atomic Optimization (adamShimi, 29 Jul 2022)
Vingean Agency (abramdemski, 24 Aug 2022)
program searches (Tamsin Leake, 5 Sep 2022)

“Normal” is the equilibrium state of past optimization processes (Alex_Altair, 30 Oct 2022)
What I Learned Running Refine (adamShimi, 24 Nov 2022)
Don’t align agents to evaluations of plans (TurnTrout, 26 Nov 2022)
Opportunity Cost Blackmail (adamShimi, 2 Jan 2023)
Deconfusing Direct vs Amortised Optimization (beren, 2 Dec 2022)
[Question] Do the Safety Properties of Powerful AI Systems Need to be Adversarially Robust? Why? (DragonGod, 9 Feb 2023)
Fundamental Uncertainty: Chapter 4 - Why don’t we do what we think we should? (Gordon Seidoh Worley, 29 Aug 2022)
Bridging Expected Utility Maximization and Optimization (Whispermute, 5 Aug 2022)
Runaway Optimizers in Mind Space (silentbob, 16 Jul 2023)
Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity (zhanpeng_zhou, 20 Jul 2023)
[Question] What are examples of someone doing a lot of work to find the best of something? (chanamessinger, 27 Jul 2023)
Optimisation Measures: Desiderata, Impossibility, Proposals (7 Aug 2023)
Goldilocks and the Three Optimisers (dkl9, 17 Aug 2023)
Observing Optimization (Eliezer Yudkowsky, 21 Nov 2008)
Satisficers want to become maximisers (Stuart_Armstrong, 21 Oct 2011)
Evolutions Building Evolutions: Layers of Generate and Test (plex, 5 Feb 2021)

Surprising examples of non-human optimization (Jan_Rzymkowski, 14 Jun 2015)
I missed the crux of the alignment problem the whole time (zeshen, 13 Aug 2022)
What’s General-Purpose Search, And Why Might We Expect To See It In Trained ML Systems? (johnswentworth, 15 Aug 2022)
Discovering Agents (zac_kenton, 18 Aug 2022)
The Gears of Argmax (StrivingForLegibility, 4 Jan 2024)
Accidental Optimizers (aysajan, 22 Sep 2021)
The Carnot Engine of Economics (StrivingForLegibility, 9 Aug 2024)
The slingshot helps with learning (Wilson Wu, 31 Oct 2024)
Interview with Bill O’Rourke—Russian Corruption, Putin, Applied Ethics, and More (JohnGreer, 27 Oct 2024)
Extinction Risks from AI: Invisible to Science? (21 Feb 2024)
Interlude: But Who Optimizes The Optimizer? (Paul Bricman, 23 Sep 2022)
Understanding Gradient Hacking (peterbarnett, 10 Dec 2021)
Transforming myopic optimization to ordinary optimization—Do we want to seek convergence for myopic optimization problems? (tailcalled, 11 Dec 2021)
Hypothesis: gradient descent prefers general circuits (Quintin Pope, 8 Feb 2022)
Wildfire of strategicness (TsviBT, 5 Jun 2023)
Optimizing crop planting with mixed integer linear programming in Stardew Valley (hapanin, 5 Apr 2022)

(Structural) Stability of Coupled Optimizers (Paul Bricman, 30 Sep 2022)
Thinking about maximization and corrigibility (James Payor, 21 Apr 2023)
Adversarial attacks and optimal control (Jan, 22 May 2022)
Architecture-aware optimisation: train ImageNet and more without hyperparameters (Chris Mingard, 22 Apr 2023)
Non-resolve as Resolve (Linda Linsefors, 10 Jul 2018)
No free lunch theorem is irrelevant (Catnee, 4 Oct 2022)
One bit of observation can unlock many of optimization—but at what cost? (dr_s, 29 Apr 2023)
Could Things Be Very Different?—How Historical Inertia Might Blind Us To Optimal Solutions (James Stephen Brown, 11 Sep 2024)
Degeneracies are sticky for SGD (16 Jun 2024)
Adam Optimizer Causes Privileged Basis in Transformer LM Residual Stream (6 Sep 2024)
Optimization and Adequacy in Five Bullets (james.lucassen, 6 Jun 2022)
Breaking Down Goal-Directed Behaviour (Oliver Sourbut, 16 Jun 2022)
Safety Data Sheets for Optimization Processes (StrivingForLegibility, 4 Jan 2024)
Perils of optimizing in social contexts (owencb, 16 Jun 2022)
Plans Are Predictions, Not Optimization Targets (johnswentworth, 20 Oct 2022)
The Limits of Automation (milkandcigarettes, 23 Jun 2022)

Is General Intelligence “Compact”? (DragonGod, 4 Jul 2022)
The Human’s Role in Mesa Optimization (silentbob, 9 May 2024)
When Can Optimization Be Done Safely? (StrivingForLegibility, 30 Dec 2023)
Degrees of Freedom (sarahconstantin, 2 Apr 2019)
Hedonic asymmetries (paulfchristiano, 26 Jan 2020)
Demons in Imperfect Search (johnswentworth, 11 Feb 2020)
Tessellating Hills: a toy model for demons in imperfect search (DaemonicSigil, 20 Feb 2020)
Aligning a toy model of optimization (paulfchristiano, 28 Jun 2019)
Siren worlds and the perils of over-optimised search (Stuart_Armstrong, 7 Apr 2014)
Don’t design agents which exploit adversarial inputs (18 Nov 2022)
Notes on Antelligence (Aurigena, 13 May 2023)
Worse Than Random (Eliezer Yudkowsky, 11 Nov 2008)
Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning (Roger Dearnaley, 21 Feb 2023)
Efficient Cross-Domain Optimization (Eliezer Yudkowsky, 28 Oct 2008)
Optimization Markets (StrivingForLegibility, 30 Dec 2023)
Some Problems with Ordinal Optimization Frame (Mateusz Bagiński, 6 May 2024)
Optimization and the Singularity (Eliezer Yudkowsky, 23 Jun 2008)
Optimization happens inside the mind, not in the world (azsantosk, 3 Jun 2023)