
Evolutionary Psychology

Last edit: Dec 27, 2023, 10:01 AM by RogerDearnaley

Evolution, the cause of the diversity of biological life on Earth, does not work the way humans do, and does not design things the way a human engineer would. This blind idiot god is also the source and patterner of human beings. “Nothing in biology makes sense except in the light of evolution,” said Theodosius Dobzhansky. Human brains are also biology, and nothing about our thinking makes sense except in the light of evolution.

Consider, for example, the following tale:

A man and a woman meet in a bar. The man is attracted to her form and clear complexion, which would have been fertility cues in the ancestral environment, but which in this case result from makeup and a bra. This does not bother the man; he just likes the way she looks. His clear-complexion-detecting neural circuitry does not know that its purpose is to detect fertility, any more than the atoms in his hand contain tiny little XML tags reading “<purpose>pick things up</purpose>”. The woman is attracted to his confident smile and firm manner, cues to high status, which in the ancestral environment would have signified the ability to provide resources for children. She plans to use birth control, but her confident-smile-detectors don’t know this any more than a toaster knows its designer intended it to make toast. She’s not concerned philosophically with the meaning of this rebellion, because her brain is a creationist and denies vehemently that evolution exists. He’s not concerned philosophically with the meaning of this rebellion, because he just wants to get laid. They go to a hotel, and undress. He puts on a condom, because he doesn’t want kids, just the dopamine-noradrenaline rush of sex, which reliably produced offspring 50,000 years ago when it was an invariant feature of the ancestral environment that condoms did not exist. They have sex, and shower, and go their separate ways. The main objective consequence is to keep the bar and the hotel and condom-manufacturer in business; which was not the cognitive purpose in their minds, and has virtually nothing to do with the key statistical regularities of reproduction 50,000 years ago which explain how they got the genes that built their brains that executed all this behavior.

This only makes sense in the light of evolution as a designer—that we are poorly optimized to reproduce by a blind and unforesightful god.

The idea of evolution as the idiot designer of humans—that our brains are not consistently well-designed—is a key element of many of the explanations of human errors that appear on this website.



Cynicism in Ev-Psych (and Econ?)
Eliezer Yudkowsky · Feb 11, 2009, 3:06 PM · 37 points · 40 comments · 4 min read · LW link

An Especially Elegant Evpsych Experiment
Eliezer Yudkowsky · Feb 13, 2009, 2:58 PM · 76 points · 41 comments · 4 min read · LW link

Why Aligning an LLM is Hard, and How to Make it Easier
RogerDearnaley · Jan 23, 2025, 6:44 AM · 30 points · 3 comments · 4 min read · LW link

The Psychological Diversity of Mankind
Kaj_Sotala · May 9, 2010, 5:53 AM · 144 points · 162 comments · 7 min read · LW link

Genetic fitness is a measure of selection strength, not the selection target
Kaj_Sotala · Nov 4, 2023, 7:02 PM · 58 points · 44 comments · 18 min read · LW link

I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too
chaosmage · Oct 15, 2022, 12:41 PM · 84 points · 9 comments · 3 min read · LW link · 1 review

Evolutionary Psychology
Eliezer Yudkowsky · Nov 11, 2007, 8:41 PM · 104 points · 42 comments · 5 min read · LW link

Problems in evolutionary psychology
Kaj_Sotala · Aug 13, 2010, 6:57 PM · 85 points · 102 comments · 8 min read · LW link

The Psychological Unity of Humankind
Eliezer Yudkowsky · Jun 24, 2008, 7:12 AM · 62 points · 23 comments · 4 min read · LW link

Protein Reinforcement and DNA Consequentialism
Eliezer Yudkowsky · Nov 13, 2007, 1:34 AM · 62 points · 20 comments · 4 min read · LW link

Babies and Bunnies: A Caution About Evo-Psych
Alicorn · Feb 22, 2010, 1:53 AM · 81 points · 843 comments · 2 min read · LW link

Instrumental vs. Epistemic—A Bardic Perspective
MBlume · Apr 25, 2009, 7:41 AM · 95 points · 189 comments · 3 min read · LW link

Alien Axiology
snerx · Apr 20, 2023, 12:27 AM · 3 points · 2 comments · 5 min read · LW link

Could evolution have selected for moral realism?
John_Maxwell · Sep 27, 2012, 4:25 AM · 7 points · 53 comments · 3 min read · LW link

Expecting Short Inferential Distances
Eliezer Yudkowsky · Oct 22, 2007, 11:42 PM · 381 points · 106 comments · 3 min read · LW link

The biological function of love for non-kin is to gain the trust of people we cannot deceive
chaosmage · Nov 7, 2022, 8:26 PM · 43 points · 3 comments · 8 min read · LW link

Preference Inversion
Benquo · Jan 2, 2025, 6:15 PM · 51 points · 46 comments · 4 min read · LW link · (benjaminrosshoffman.com)

Guilt: Another Gift Nobody Wants
Scott Alexander · Mar 31, 2011, 12:27 AM · 102 points · 103 comments · 8 min read · LW link

A “Bitter Lesson” Approach to Aligning AGI and ASI
RogerDearnaley · Jul 6, 2024, 1:23 AM · 60 points · 39 comments · 24 min read · LW link

Attention to snakes not fear of snakes: evolution encoding environmental knowledge in peripheral systems
Kaj_Sotala · Oct 2, 2020, 11:50 AM · 46 points · 1 comment · 3 min read · LW link · (kajsotala.fi)

Are most personality disorders really trust disorders?
chaosmage · Feb 6, 2024, 12:37 PM · 20 points · 4 comments · 1 min read · LW link

Rational vs. Scientific Ev-Psych
Eliezer Yudkowsky · Jan 4, 2008, 7:01 AM · 35 points · 50 comments · 3 min read · LW link

Links for May
Kaj_Sotala · Jun 1, 2024, 10:20 AM · 20 points · 16 comments · 18 min read · LW link · (kajsotala.fi)

Neuroscience of human social instincts: a sketch
Steven Byrnes · Nov 22, 2024, 4:16 PM · 69 points · 0 comments · 31 min read · LW link

Leverage, Exit Costs, and Anger: Re-examining Why We Explode at Home, Not at Work
at_the_zoo · Apr 1, 2025, 6:28 PM · 20 points · 2 comments · 2 min read · LW link

Would Your Real Preferences Please Stand Up?
Scott Alexander · Aug 8, 2009, 10:57 PM · 93 points · 132 comments · 4 min read · LW link

The Gift We Give To Tomorrow
Eliezer Yudkowsky · Jul 17, 2008, 6:07 AM · 152 points · 100 comments · 8 min read · LW link

How AGI will actually end us: Some predictions on evolution by artificial selection
James Carney · Apr 10, 2023, 1:52 PM · −11 points · 1 comment · 13 min read · LW link

Book Review: Orality and Literacy: The Technologizing of the Word
Fergus Fettes · Oct 28, 2023, 8:12 PM · 13 points · 0 comments · 16 min read · LW link

GPT-2 XL’s capacity for coherence and ontology clustering
MiguelDev · Oct 30, 2023, 9:24 AM · 6 points · 2 comments · 41 min read · LW link

My Dating Plan ala Geoffrey Miller
snog toddgrass · Jul 17, 2020, 4:52 AM · 2 points · 57 comments · 3 min read · LW link

Hedonic asymmetries
paulfchristiano · Jan 26, 2020, 2:10 AM · 98 points · 22 comments · 2 min read · LW link · (sideways-view.com)

Ends Don’t Justify Means (Among Humans)
Eliezer Yudkowsky · Oct 14, 2008, 9:00 PM · 203 points · 97 comments · 4 min read · LW link

Speculative Evopsych, Ep. 1
Optimization Process · Nov 22, 2018, 7:00 PM · 41 points · 9 comments · 1 min read · LW link

The Moral Copernican Principle
Legionnaire · May 2, 2023, 3:25 AM · 5 points · 7 comments · 2 min read · LW link

Detached Lever Fallacy
Eliezer Yudkowsky · Jul 31, 2008, 6:57 PM · 91 points · 43 comments · 7 min read · LW link

Reasoning isn’t about logic (it’s about arguing)
Morendil · Mar 14, 2010, 4:42 AM · 66 points · 31 comments · 3 min read · LW link

Trivers on Self-Deception
Scott Alexander · Jul 12, 2011, 9:04 PM · 66 points · 27 comments · 4 min read · LW link

Why Support the Underdog?
Scott Alexander · Apr 5, 2009, 12:01 AM · 43 points · 102 comments · 3 min read · LW link

Rebelling Within Nature
Eliezer Yudkowsky · Jul 13, 2008, 12:32 PM · 43 points · 38 comments · 8 min read · LW link

The Evolutionary-Cognitive Boundary
Eliezer Yudkowsky · Feb 12, 2009, 4:44 PM · 51 points · 29 comments · 3 min read · LW link

Sympathetic Minds
Eliezer Yudkowsky · Jan 19, 2009, 9:31 AM · 70 points · 27 comments · 5 min read · LW link

Minds: An Introduction
Rob Bensinger · Mar 11, 2015, 7:00 PM · 52 points · 2 comments · 6 min read · LW link

Is That Your True Rejection? by Eliezer Yudkowsky @ Cato Unbound
XiXiDu · Sep 7, 2011, 6:27 PM · 44 points · 83 comments · 1 min read · LW link

Shittests are actually good
snog toddgrass · Sep 24, 2020, 5:20 PM · −11 points · 23 comments · 2 min read · LW link

A study on depression
vlad.proex · Oct 13, 2020, 3:43 PM · 21 points · 1 comment · 9 min read · LW link

Fading Novelty
lifelonglearner · Jul 25, 2018, 9:36 PM · 26 points · 2 comments · 6 min read · LW link

Accelerate without humanity: Summary of Nick Land’s philosophy
Yuxi_Liu · Jun 16, 2019, 3:22 AM · 37 points · 24 comments · 12 min read · LW link

Ethical Inhibitions
Eliezer Yudkowsky · Oct 19, 2008, 8:44 PM · 31 points · 63 comments · 5 min read · LW link

The Wire versus Evolutionary Psychology
MrShaggy · May 25, 2009, 5:21 AM · 18 points · 19 comments · 1 min read · LW link

Will value of paid sex drop right before the end of the world?
azamatvaliev · Sep 2, 2023, 7:03 PM · −13 points · 0 comments · 4 min read · LW link

A Theory of Laughter
Steven Byrnes · Aug 23, 2023, 3:05 PM · 102 points · 14 comments · 28 min read · LW link

A Failed Just-So Story
Eliezer Yudkowsky · Jan 5, 2008, 6:35 AM · 21 points · 49 comments · 2 min read · LW link

Compromising with Compulsion
matejsuchy · Feb 25, 2021, 4:43 PM · 4 points · 1 comment · 8 min read · LW link

Machines vs Memes Part 1: AI Alignment and Memetics
Harriet Farlow · May 31, 2022, 10:03 PM · 19 points · 1 comment · 6 min read · LW link

My take on Jacob Cannell’s take on AGI safety
Steven Byrnes · Nov 28, 2022, 2:01 PM · 72 points · 15 comments · 30 min read · LW link · 1 review

Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning
Roger Dearnaley · Feb 21, 2023, 9:05 AM · 10 points · 1 comment · 23 min read · LW link

Thou Art Godshatter
Eliezer Yudkowsky · Nov 13, 2007, 7:38 PM · 237 points · 83 comments · 5 min read · LW link

Morality as Cooperation Part I: Humans
DeLesley Hutchins · Dec 5, 2024, 8:16 AM · 5 points · 0 comments · 19 min read · LW link

5. Moral Value for Sentient Animals? Alas, Not Yet
RogerDearnaley · Dec 27, 2023, 6:42 AM · 33 points · 41 comments · 23 min read · LW link

Motivating Alignment of LLM-Powered Agents: Easy for AGI, Hard for ASI?
RogerDearnaley · Jan 11, 2024, 12:56 PM · 35 points · 4 comments · 39 min read · LW link

Goodbye, Shoggoth: The Stage, its Animatronics, & the Puppeteer – a New Metaphor
RogerDearnaley · Jan 9, 2024, 8:42 PM · 47 points · 8 comments · 36 min read · LW link

7. Evolution and Ethics
RogerDearnaley · Feb 15, 2024, 11:38 PM · 3 points · 6 comments · 6 min read · LW link

Alignment has a Basin of Attraction: Beyond the Orthogonality Thesis
RogerDearnaley · Feb 1, 2024, 9:15 PM · 15 points · 15 comments · 13 min read · LW link

Requirements for a Basin of Attraction to Alignment
RogerDearnaley · Feb 14, 2024, 7:10 AM · 41 points · 12 comments · 31 min read · LW link

Darwinian Traps and Existential Risks
KristianRonn · Aug 25, 2024, 10:37 PM · 85 points · 14 comments · 10 min read · LW link

“Arctic Instincts? The universal principles of Arctic psychological adaptation and the origins of East Asian psychology”—Call for Reviewers (Seeds of Science)
rogersbacon · Feb 16, 2024, 3:02 PM · 0 points · 0 comments · 2 min read · LW link

6. The Mutable Values Problem in Value Learning and CEV
RogerDearnaley · Dec 4, 2023, 6:31 PM · 12 points · 0 comments · 49 min read · LW link

Arusha Perpetual Chicken—an unlikely iterated game
James Stephen Brown · Apr 6, 2025, 10:56 PM · 15 points · 1 comment · 5 min read · LW link · (nonzerosum.games)

The Evolution towards the Blank Slate
Arturo Macias · Jun 10, 2024, 3:20 PM · −6 points · 0 comments · 3 min read · LW link

Toward a taxonomy of cognitive benchmarks for agentic AGIs
Ben Smith · Jun 27, 2024, 11:50 PM · 15 points · 0 comments · 5 min read · LW link

We’re not as 3-Dimensional as We Think
silentbob · Aug 4, 2024, 2:39 PM · 39 points · 17 comments · 5 min read · LW link

The “Everyone Can’t Be Wrong” Prior causes AI risk denial but helped prehistoric people
Knight Lee · Jan 9, 2025, 5:54 AM · 1 point · 0 comments · 2 min read · LW link

Siberian Arctic origins of East Asian psychology
davidsun · Mar 16, 2025, 4:52 PM · 5 points · 0 comments · 1 min read · LW link

Clarifying the free energy principle (with quotes)
Ryo · Oct 29, 2023, 4:03 PM · 8 points · 0 comments · 9 min read · LW link

From No Mind to a Mind – A Conversation That Changed an AI
parthibanarjuna s · Feb 7, 2025, 11:50 AM · 1 point · 0 comments · 3 min read · LW link

How identical twin sisters feel about nieces vs their own daughters
Dave Lindbergh · Feb 9, 2025, 5:36 PM · 3 points · 19 comments · 1 min read · LW link

Alignment—Path to AI as ally, not slave nor foe
ozb · Mar 30, 2023, 2:54 PM · 10 points · 3 comments · 2 min read · LW link