
Evolution


Evolution is “change in the heritable characteristics of biological populations over successive generations” (Wikipedia). For posts about machine learning, look here.

Related: Biology, Evolutionary Psychology

The sequence The Simple Math of Evolution provides a good introduction to LessWrong thinking about evolution.

Why be interested in evolution?

First, evolution is a useful case study of humans’ ability (or inability) to model the real world, because it has a single clear criterion (“relative reproductive fitness”) that is selected (optimized) for:

“If we can’t see clearly the result of a single monotone optimization criterion—if we can’t even train ourselves to hear a single pure note—then how will we listen to an orchestra? How will we see that “Always be selfish” or “Always obey the government” are poor guiding principles for human beings to adopt—if we think that even optimizing genes for inclusive fitness will yield organisms which sacrifice reproductive opportunities in the name of social resource conservation?

To train ourselves to see clearly, we need simple practice cases.”—Eliezer Yudkowsky, Fake Optimization Criteria

Second, much of rationality necessarily revolves around the human brain (for now). An understanding of how the brain came into being can be very helpful both for understanding 'bugs' in the system (like superstimuli) and for explaining Complexity of Value, among other things.

A candy bar is a superstimulus: it contains more concentrated sugar, salt, and fat than anything that exists in the ancestral environment. A candy bar matches taste buds that evolved in a hunter-gatherer environment, but it matches those taste buds much more strongly than anything that actually existed in the hunter-gatherer environment. The signal that once reliably correlated to healthy food has been hijacked, blotted out with a point in tastespace that wasn’t in the training dataset—an impossibly distant outlier on the old ancestral graphs.
-- Eliezer Yudkowsky, Superstimuli and the Collapse of Western Civilization

Posts on Evolution

The following posts concern evolution; many of them are from Eliezer Yudkowsky’s sequences:

Adaptation-Executers, not Fitness-Maximizers (Eliezer Yudkowsky, 11 Nov 2007; 156 points, 33 comments, 3 min read)
The Talk: a brief explanation of sexual dimorphism (Malmesbury, 18 Sep 2023; 492 points, 72 comments, 16 min read)
The Tragedy of Group Selectionism (Eliezer Yudkowsky, 7 Nov 2007; 121 points, 89 comments, 5 min read)
7. Evolution and Ethics (RogerDearnaley, 15 Feb 2024; 3 points, 6 comments, 6 min read)
Problems in evolutionary psychology (Kaj_Sotala, 13 Aug 2010; 85 points, 102 comments, 8 min read)
Requirements for a Basin of Attraction to Alignment (RogerDearnaley, 14 Feb 2024; 38 points, 10 comments, 31 min read)
An Alien God (Eliezer Yudkowsky, 2 Nov 2007; 213 points, 162 comments, 8 min read)
Evolving to Extinction (Eliezer Yudkowsky, 16 Nov 2007; 132 points, 44 comments, 6 min read)
The innocent gene (Joe Carlsmith, 5 Apr 2021; 39 points, 3 comments, 9 min read)
A common misconception about the evolution of viruses (mukashi, 6 Jan 2022; 7 points, 15 comments, 1 min read)
Will the growing deer prion epidemic spread to humans? Why not? (eukaryote, 25 Jun 2023; 170 points, 33 comments, 13 min read; eukaryotewritesblog.com)
Studies On Slack (Scott Alexander, 13 May 2020; 165 points, 34 comments, 24 min read; slatestarcodex.com)
Genetic fitness is a measure of selection strength, not the selection target (Kaj_Sotala, 4 Nov 2023; 56 points, 43 comments, 18 min read)
Conjuring An Evolution To Serve You (Eliezer Yudkowsky, 19 Nov 2007; 75 points, 26 comments, 4 min read)
Evolution of Modularity (johnswentworth, 14 Nov 2019; 185 points, 12 comments, 2 min read)
Analogies and General Priors on Intelligence (20 Aug 2021; 57 points, 12 comments, 14 min read)
Evolutionary Psychology (Eliezer Yudkowsky, 11 Nov 2007; 103 points, 42 comments, 5 min read)
The heritability of human values: A behavior genetic critique of Shard Theory (geoffreymiller, 20 Oct 2022; 80 points, 59 comments, 21 min read)
There’s no such thing as a tree (phylogenetically) (eukaryote, 3 May 2021; 336 points, 59 comments, 8 min read; eukaryotewritesblog.com)
Conditions for mathematical equivalence of Stochastic Gradient Descent and Natural Selection (Oliver Sourbut, 9 May 2022; 70 points, 19 comments, 8 min read; www.oliversourbut.net)
Protein Reinforcement and DNA Consequentialism (Eliezer Yudkowsky, 13 Nov 2007; 61 points, 20 comments, 4 min read)
Beware of Stephen J. Gould (Eliezer Yudkowsky, 6 Nov 2007; 60 points, 80 comments, 6 min read)
Evolutions Are Stupid (But Work Anyway) (Eliezer Yudkowsky, 3 Nov 2007; 103 points, 68 comments, 4 min read)
Book Review: Why Everyone (Else) Is a Hypocrite (PeterMcCluskey, 9 Oct 2021; 26 points, 7 comments, 3 min read; www.bayesianinvestor.com)
Trying to align humans with inclusive genetic fitness (peterbarnett, 11 Jan 2024; 23 points, 5 comments, 10 min read)
[Question] What’s this 3rd secret directive of evolution called? (survive & spread & ___) (lemonhope, 7 Feb 2024; 10 points, 11 comments, 1 min read)
The Standard Analogy (Zack_M_Davis, 3 Jun 2024; 118 points, 28 comments, 12 min read)
The Great Organism Theory of Evolution (rogersbacon, 10 Aug 2024; 20 points, 0 comments, 6 min read; www.secretorum.life)
[Question] What are the primary drivers that caused selection pressure for intelligence in humans? (Towards_Keeperhood, 7 Nov 2024; 4 points, 15 comments, 1 min read)
Lessons from Convergent Evolution for AI Alignment (27 Mar 2023; 54 points, 9 comments, 8 min read)
AI and Evolution (Dan H, 30 Mar 2023; 27 points, 4 comments, 2 min read; arxiv.org)
Superstimuli and the Collapse of Western Civilization (Eliezer Yudkowsky, 16 Mar 2007; 150 points, 89 comments, 4 min read)
Coordination Problems in Evolution: Eigen’s Paradox (Martin Sustrik, 12 Oct 2018; 105 points, 6 comments, 8 min read; 250bpm.com)
Computer bugs and evolution (PhilGoetz, 26 Oct 2009; 55 points, 10 comments, 1 min read)
No Evolutions for Corporations or Nanodevices (Eliezer Yudkowsky, 17 Nov 2007; 102 points, 32 comments, 6 min read)
Coordination Problems in Evolution: The Rise of Eukaryotes (Martin Sustrik, 15 Oct 2018; 49 points, 8 comments, 8 min read)
The Wonder of Evolution (Eliezer Yudkowsky, 2 Nov 2007; 92 points, 85 comments, 4 min read)
My idea of sacredness, divinity, and religion (Kaj_Sotala, 29 Oct 2023; 40 points, 10 comments, 4 min read; kajsotala.fi)
Assortative Mating And Autism (Scott Alexander, 28 Jan 2020; 50 points, 2 comments, 4 min read; slatestarcodex.com)
[link] Back to the trees ([deleted], 4 Nov 2011; 132 points, 47 comments, 2 min read)
The Octopus, the Dolphin and Us: a Great Filter tale (Stuart_Armstrong, 3 Sep 2014; 76 points, 236 comments, 3 min read)
Your Evolved Intuitions (lukeprog, 5 May 2011; 22 points, 106 comments, 10 min read)
Why would evolution favor more bad? (KatjaGrace, 6 Oct 2013; 1 point, 0 comments, 3 min read)
The Psychological Diversity of Mankind (Kaj_Sotala, 9 May 2010; 142 points, 162 comments, 7 min read)
The Psychological Unity of Humankind (Eliezer Yudkowsky, 24 Jun 2008; 61 points, 23 comments, 4 min read)
Thou Art Godshatter (Eliezer Yudkowsky, 13 Nov 2007; 235 points, 83 comments, 5 min read)
In Search of Slack (Martin Sustrik, 23 May 2020; 47 points, 3 comments, 6 min read; 250bpm.com)
Growing Up is Hard (Eliezer Yudkowsky, 4 Jan 2009; 55 points, 41 comments, 7 min read)
Biomimetic alignment: Alignment between animal genes and animal brains as a model for alignment between humans and AI systems (geoffreymiller, 8 Jun 2023; 10 points, 1 comment, 16 min read)
Crisis of Faith case study: beyond reductionism? (MalcolmOcean, 8 Jun 2023; 6 points, 9 comments, 19 min read)
Perpetually Declining Population? (jefftk, 8 Aug 2023; 48 points, 29 comments, 3 min read; www.jefftk.com)
Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems (Kaj_Sotala, 2 Oct 2020; 46 points, 1 comment, 3 min read; kajsotala.fi)
The Darwin Game (lsusr, 9 Oct 2020; 91 points, 131 comments, 3 min read)
We don’t understand what happened with culture enough (Jan_Kulveit, 9 Oct 2023; 86 points, 21 comments, 6 min read)
Response to Quintin Pope’s Evolution Provides No Evidence For the Sharp Left Turn (Zvi, 5 Oct 2023; 129 points, 29 comments, 9 min read)
Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare (Kaj_Sotala, 24 Nov 2020; 83 points, 20 comments, 2 min read; www.liebertpub.com)
Evolution Solved Alignment (what sharp left turn?) (jacob_cannell, 12 Oct 2023; 16 points, 89 comments, 4 min read)
Against evolution as an analogy for how humans will create AGI (Steven Byrnes, 23 Mar 2021; 65 points, 25 comments, 25 min read)
Anthropic Effects in Estimating Evolution Difficulty (Mark Xu, 5 Jul 2021; 12 points, 2 comments, 3 min read)
The two-headed bacterium (Malmesbury, 10 Aug 2021; 66 points, 4 comments, 7 min read)
[Book Review] “The Vital Question” by Nick Lane (lsusr, 27 Sep 2021; 70 points, 28 comments, 5 min read)
Contra Paul Christiano on Sex (George3d6, 1 Oct 2021; 22 points, 19 comments, 9 min read; cerebralab.com)
[Book Review] Evolution of Sex (Alex Hollow, 3 Oct 2021; 36 points, 0 comments, 8 min read; alexhollow.wordpress.com)
[Question] Do you like excessive sugar? (momom2, 9 Oct 2021; 3 points, 11 comments, 1 min read)
[Question] Total compute available to evolution (redbird, 9 Jan 2022; 16 points, 18 comments, 1 min read)
Whence the sexes? (Richard_Ngo, 13 Feb 2022; 61 points, 35 comments, 3 min read; thinkingcomplete.blogspot.com)
Replacing Natural Interpretations (adamShimi, 16 Mar 2022; 19 points, 0 comments, 7 min read)
Examining Evolution as an Upper Bound for AGI Timelines (meanderingmoose, 24 Apr 2022; 5 points, 1 comment, 9 min read)
We haven’t quit evolution [short] (the gears to ascension, 6 Jun 2022; 5 points, 3 comments, 2 min read)
Human values & biases are inaccessible to the genome (TurnTrout, 7 Jul 2022; 94 points, 54 comments, 6 min read)
How evolution succeeds and fails at value alignment (Ocracoke, 21 Aug 2022; 21 points, 2 comments, 4 min read)
Orexin and the quest for more waking hours (ChristianKl, 24 Sep 2022; 130 points, 39 comments, 5 min read)
Winning is for Losers (Jacob Falkovich, 11 Oct 2017; 31 points, 12 comments, 18 min read; putanumonit.com)
Notes From an Apocalypse (Toggle, 22 Sep 2017; 59 points, 24 comments, 14 min read)
You’re Entitled to Arguments, But Not (That Particular) Proof (Eliezer Yudkowsky, 15 Feb 2010; 88 points, 229 comments, 8 min read)
You’re in Newcomb’s Box (HonoreDB, 5 Feb 2011; 59 points, 176 comments, 4 min read)
Humans in Funny Suits (Eliezer Yudkowsky, 30 Jul 2008; 82 points, 133 comments, 7 min read)
Anthropomorphic Optimism (Eliezer Yudkowsky, 4 Aug 2008; 81 points, 60 comments, 5 min read)
“Arctic Instincts? The universal principles of Arctic psychological adaptation and the origins of East Asian psychology”—Call for Reviewers (Seeds of Science) (rogersbacon, 16 Feb 2024; 0 points, 0 comments, 2 min read)
The Goal Misgeneralization Problem (Myspy, 18 May 2023; 1 point, 0 comments, 1 min read; drive.google.com)
Group selection update (PhilGoetz, 1 Nov 2010; 49 points, 68 comments, 5 min read)
Darwinian Traps and Existential Risks (KristianRonn, 25 Aug 2024; 76 points, 14 comments, 10 min read)
The Fourth Arena 2: New beings in time (Bill Benzon, 5 Jun 2022; 1 point, 0 comments, 2 min read)
The Human Phase Hypothesis (Why We Might Be Alone) (ifalpha, 25 Aug 2023; −4 points, 6 comments, 20 min read)
Neuroevolution, Social Intelligence, and Logic (vinnik.dmitry07, 31 May 2023; 1 point, 0 comments, 10 min read)
Three Fallacies of Teleology (Eliezer Yudkowsky, 25 Aug 2008; 36 points, 14 comments, 9 min read)
Why I’m Skeptical of De-Extinction (Niko_McCarty, 23 Feb 2023; 16 points, 1 comment, 11 min read; cell.substack.com)
Deliberation Everywhere: Simple Examples (Oliver Sourbut, 27 Jun 2022; 27 points, 3 comments, 15 min read)
Contingency: A Conceptual Tool from Evolutionary Biology for Alignment (clem_acs, 12 Jun 2023; 57 points, 2 comments, 14 min read; acsresearch.org)
[Question] Why do humans not have built-in neural i/o channels? (Richard_Ngo, 8 Aug 2019; 25 points, 23 comments, 1 min read)
What strange and ancient things might we find beneath the ice? (Benquo, 15 Jan 2018; 16 points, 2 comments, 2 min read; benjaminrosshoffman.com)
Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound (XiXiDu, 7 Sep 2011; 44 points, 83 comments, 1 min read)
What is the group selection debate? (Academian, 2 Nov 2010; 38 points, 16 comments, 3 min read)
Evolution is an observation, not a process (Neil, 6 Feb 2024; 8 points, 11 comments, 5 min read)
Reframing Evolution—An information wavefront traveling through time (Joshua Clancy, 10 Jul 2023; 1 point, 0 comments, 5 min read; midflip.org)
Natural Selection’s Speed Limit and Complexity Bound (Eliezer Yudkowsky, 4 Nov 2007; 11 points, 105 comments, 5 min read)
Narrative Theory. Part 3. Simplest to succeed (Eris, 16 Jul 2023; 4 points, 0 comments, 1 min read)
Narrative Theory. Part 2. A new way of doing the same thing (Eris, 15 Jul 2023; 2 points, 0 comments, 1 min read)
Nature abhors an immutable replicator… usually (MSRayne, 3 Jul 2022; 28 points, 10 comments, 3 min read)
More Questions about Trees (digital_carver, 9 Oct 2020; 3 points, 5 comments, 1 min read; bit-player.org)
Modularity and Buzzy (Kaj_Sotala, 4 Aug 2011; 33 points, 27 comments, 9 min read)
Why mathematics works (Douglas_Reay, 8 Mar 2018; 7 points, 4 comments, 5 min read)
ACI#5: From Human-AI Co-evolution to the Evolution of Value Systems (Akira Pyinya, 18 Aug 2023; 0 points, 0 comments, 9 min read)
A Failed Just-So Story (Eliezer Yudkowsky, 5 Jan 2008; 21 points, 49 comments, 2 min read)
Observing Optimization (Eliezer Yudkowsky, 21 Nov 2008; 12 points, 28 comments, 6 min read)
Building Something Smarter (Eliezer Yudkowsky, 2 Nov 2008; 26 points, 57 comments, 4 min read)
Musings on Cumulative Cultural Evolution and AI (calebo, 7 Jul 2019; 19 points, 5 comments, 7 min read)
The Evolutionary Pathway from Biological to Digital Intelligence: A Cosmic Perspective (George360, 5 Sep 2023; −17 points, 0 comments, 4 min read)
Machine Evolution (11 Sep 2023; 11 points, 2 comments, 22 min read)
Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis (RogerDearnaley, 1 Feb 2024; 13 points, 15 comments, 13 min read)
The Dumbest Possible Gets There First (Artaxerxes, 13 Aug 2022; 44 points, 7 comments, 2 min read)
On the Relationship Between Variability and the Evolutionary Outcomes of Systems in Nature (Artyom Shaposhnikov, 7 Oct 2023; 2 points, 0 comments, 1 min read)
Do bamboos set themselves on fire? (Malmesbury, 19 Sep 2022; 170 points, 14 comments, 6 min read)
[Question] Are There Other Ideas as Generally Applicable as Natural Selection (Amin Sennour, 18 May 2024; 1 point, 1 comment, 1 min read)
An appeal for vitamin D supplementation as a prophylactic for coronaviruses and influenza and a simple evolutionary theory for why this is plausible. (Michael A, 22 Dec 2020; 11 points, 1 comment, 9 min read)
Are we all misaligned? (Mateusz Mazurkiewicz, 3 Jan 2021; 11 points, 0 comments, 5 min read)
On the nature of purpose (Nora_Ammann, 22 Jan 2021; 29 points, 15 comments, 9 min read)
Evolutions Building Evolutions: Layers of Generate and Test (plex, 5 Feb 2021; 12 points, 1 comment, 6 min read)
Idea selection (krbouchard, 1 Mar 2021; 1 point, 0 comments, 2 min read)
Fisherian Runaway as a decision-theoretic problem (Bunthut, 20 Mar 2021; 11 points, 0 comments, 3 min read)
What “The Message” Was For Me (Alex Beyman, 11 Oct 2022; −3 points, 14 comments, 4 min read)
Musings on Cargo Cult Consciousness (Gareth Davidson, 25 Jan 2024; −14 points, 11 comments, 17 min read)
Values Darwinism (pchvykov, 22 Jan 2024; 11 points, 13 comments, 3 min read)
Apply to the PIBBSS Summer Research Fellowship (12 Jan 2024; 39 points, 1 comment, 2 min read)
[Question] How can we prevent AGI value drift? (Dakara, 20 Nov 2024; 16 points, 1 comment, 1 min read)
Clarifying the free energy principle (with quotes) (Ryo, 29 Oct 2023; 8 points, 0 comments, 9 min read)
A Brief Introduction to ACI, 2: An Event-Centric View (Akira Pyinya, 12 Apr 2023; 3 points, 0 comments, 2 min read)
Some real examples of gradient hacking (Oliver Sourbut, 22 Nov 2021; 15 points, 8 comments, 2 min read)
Second-order selection against the immortal (Malmesbury, 3 Dec 2021; 45 points, 47 comments, 6 min read)
We Don’t Just Let People Die—So What Next? (James Stephen Brown, 3 Aug 2024; 11 points, 8 comments, 10 min read)
[Question] Has Eliezer publicly and satisfactorily responded to attempted rebuttals of the analogy to evolution? (kaler, 28 Jul 2024; 10 points, 14 comments, 1 min read)
SYSTEMA ROBOTICA (Ali Ahmed, 12 Aug 2024; 12 points, 2 comments, 30 min read)
Sex, Death, and Complexity (Zero Contradictions, 25 Jul 2024; 0 points, 0 comments, 1 min read; thewaywardaxolotl.blogspot.com)
Motivations, Natural Selection, and Curriculum Engineering (Oliver Sourbut, 16 Dec 2021; 16 points, 0 comments, 42 min read)
The Genetics of Space Amazons (Jan Christian Refsgaard, 30 Dec 2021; 12 points, 12 comments, 5 min read)
Regularization Causes Modularity Causes Generalization (dkirmani, 1 Jan 2022; 50 points, 7 comments, 3 min read)
Goodhart’s Law and Emotions (Zero Contradictions, 7 Jul 2024; 1 point, 5 comments, 1 min read; expandingrationality.substack.com)
Evolution provides no evidence for the sharp left turn (Quintin Pope, 11 Apr 2023; 205 points, 62 comments, 15 min read)
What bootstraps intelligence? (invertedpassion, 10 Sep 2024; 2 points, 2 comments, 1 min read)
Emergence Is a Universal Non-Zero-Sum Phenomenon. (James Stephen Brown, 14 May 2024; 6 points, 0 comments, 1 min read; nonzerosum.games)
The Dark Miracle of Optics (Suspended Reason, 24 Jun 2020; 27 points, 5 comments, 8 min read)
Takeoff speeds, the chimps analogy, and the Cultural Intelligence Hypothesis (NickGabs, 2 Dec 2022; 16 points, 2 comments, 4 min read)
Could evolution produce something truly aligned with its own optimization standards? What would an answer to this mean for AI alignment? (No77e, 8 Jan 2023; 3 points, 4 comments, 1 min read)
Can we simulate human evolution to create a somewhat aligned AGI? (Thomas Kwa, 28 Mar 2022; 21 points, 6 comments, 7 min read)
Is Fisherian Runaway Gradient Hacking? (Ryan Kidd, 10 Apr 2022; 15 points, 6 comments, 4 min read)
“Inner Alignment Failures” Which Are Actually Outer Alignment Failures (johnswentworth, 31 Oct 2020; 66 points, 38 comments, 5 min read)
The virtuous circle: twelve conjectures about female reproductive agency and cultural self-determination (Miles Saltiel, 27 Dec 2023; 0 points, 2 comments, 14 min read)
A multi-disciplinary view on AI safety research (Roman Leventov, 8 Feb 2023; 43 points, 4 comments, 26 min read)
The Fourth Arena: What’s Up in the world these days? We’re moving to a new, a new what? (Bill Benzon, 4 Jun 2022; 2 points, 0 comments, 3 min read)
Have we really forsaken natural selection? (KatjaGrace, 27 Mar 2024; 15 points, 4 comments, 2 min read; worldspiritsockpuppet.com)
Are AIs like Animals? Perspectives and Strategies from Biology (Jackson Emanuel, 16 May 2023; 1 point, 0 comments, 21 min read)
References & Resources for LessWrong (XiXiDu, 10 Oct 2010; 167 points, 104 comments, 20 min read)
Hedonic asymmetries (paulfchristiano, 26 Jan 2020; 98 points, 22 comments, 2 min read; sideways-view.com)
Reframing the evolutionary benefit of sex (paulfchristiano, 14 Sep 2019; 91 points, 21 comments, 2 min read; sideways-view.com)