Thou Art Godshatter
Before the 20th century, not a single human being had an explicit concept of “inclusive genetic fitness”, the sole and absolute obsession of the blind idiot god. We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don’t perform a check for reproductive efficacy before granting us sexual pleasure.
Why not? Why aren’t we consciously obsessed with inclusive genetic fitness? Why did the Evolution-of-Humans Fairy create brains that would invent condoms? “It would have been so easy,” thinks the human, who can design new complex systems in an afternoon.
The Evolution Fairy, as we all know, is obsessed with inclusive genetic fitness. When she decides which genes to promote to universality, she doesn’t seem to take into account anything except the number of copies a gene produces. (How strange!)
But since the maker of intelligence is thus obsessed, why not create intelligent agents—you can’t call them humans—who would likewise care purely about inclusive genetic fitness? Such agents would have sex only as a means of reproduction, and wouldn’t bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn’t eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.
It seems like such an obvious design improvement—from the Evolution Fairy’s perspective.
Now it’s clear, as was discussed yesterday, that it’s hard to build a powerful enough consequentialist. Natural selection sort-of reasons consequentially, but only by depending on the actual consequences. Human evolutionary theorists have to do really high-falutin’ abstract reasoning in order to imagine the links between adaptations and reproductive success.
But human brains clearly can imagine these links in protein. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?
It’s been less than two centuries since a protein brain first represented the concept of natural selection. The modern notion of “inclusive genetic fitness” is even more subtle, a highly abstract concept. What matters is not the number of shared genes. Chimpanzees share 95% of your genes. What matters is shared genetic variance, within a reproducing population—your sister is one-half related to you, because any variations in your genome, within the human species, are 50% likely to be shared by your sister.
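To make the arithmetic concrete, here is a minimal sketch, not part of the original essay, that checks the one-half figure by simulation. The setup is an illustrative assumption: one parent carries a rare variant on one chromosome, you happen to have inherited it, and we ask how often a full sibling inherits it too.

```python
import random

def shared_variant_fraction(trials=100_000):
    """Monte Carlo check of the relatedness figure: given that you inherited
    a rare variant from a heterozygous parent, how often does a full sibling
    carry that same variant? The fraction should hover around 0.5."""
    both = 0
    you_have_it = 0
    for _ in range(trials):
        you = random.random() < 0.5   # you inherit the parent's variant copy
        sib = random.random() < 0.5   # your sibling inherits it, independently
        if you:
            you_have_it += 1
            both += sib
    return both / you_have_it

print(shared_variant_fraction())  # roughly 0.5: one-half relatedness
```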
Only in the last century—arguably only in the last fifty years—have evolutionary biologists really begun to understand the full range of causes of reproductive success, things like reciprocal altruism and costly signaling. Without all this highly detailed knowledge, an intelligent agent that set out to “maximize inclusive fitness” would fall flat on its face.
So why not preprogram protein brains with the knowledge? Why wasn’t a concept of “inclusive genetic fitness” programmed into us, along with a library of explicit strategies? Then you could dispense with all the reinforcers. The organism would be born knowing that, with high probability, fatty foods would lead to fitness. If the organism later learned that this was no longer the case, it would stop eating fatty foods. You could refactor the whole system. And it wouldn’t invent condoms or cookies.
This looks like it should be quite possible in principle. I occasionally run into people who don’t quite understand consequentialism, who say, “But if the organism doesn’t have a separate drive to eat, it will starve, and so fail to reproduce.” So long as the organism knows this very fact, and has a utility function that values reproduction, it will automatically eat. In fact, this is exactly the consequentialist reasoning that natural selection itself used to build automatic eaters.
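As a toy illustration of that consequentialist point (a hypothetical sketch with invented numbers, not a claim about how any real agent is built): give an agent a utility function that counts only expected offspring and a world model in which eating raises the chance of surviving to reproduce, and eating gets chosen with no separate hunger drive at all.

```python
# Toy expected-utility chooser: utility counts offspring only. Eating is
# selected purely as a means, because the world model says organisms that
# skip meals rarely survive to reproduce. All numbers are illustrative.

SURVIVAL_PROB = {"eat": 0.95, "skip_meal": 0.20}   # assumed survival odds
OFFSPRING_IF_SURVIVING = 2.0                       # assumed expected offspring

def expected_offspring(action: str) -> float:
    return SURVIVAL_PROB[action] * OFFSPRING_IF_SURVIVING

best_action = max(SURVIVAL_PROB, key=expected_offspring)
print(best_action)  # "eat": chosen instrumentally, not because eating is rewarding
```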
What about curiosity? Wouldn’t a consequentialist only be curious when it saw some specific reason to be curious? And wouldn’t this cause it to miss out on lots of important knowledge that came with no specific reason for investigation attached? Again, a consequentialist will investigate given only the knowledge of this very same fact. If you consider the curiosity drive of a human—which is not undiscriminating, but responds to particular features of problems—then this complex adaptation is purely the result of consequentialist reasoning by DNA, an implicit representation of knowledge: Ancestors who engaged in this kind of inquiry left more descendants.
So in principle, the pure reproductive consequentialist is possible. In principle, all the ancestral history implicitly represented in cognitive adaptations can be converted to explicitly represented knowledge, running on a core consequentialist.
But the blind idiot god isn’t that smart. Evolution is not a human programmer who can simultaneously refactor whole code architectures. Evolution is not a human programmer who can sit down and type out instructions at sixty words per minute.
For millions of years before hominid consequentialism, there was reinforcement learning. The reward signals were events that correlated reliably to reproduction. You can’t ask a nonhominid brain to foresee that a child eating fatty foods now will live through the winter. So the DNA builds a protein brain that generates a reward signal for eating fatty food. Then it’s up to the organism to learn which prey animals are tastiest.
DNA constructs protein brains with reward signals that have a long-distance correlation to reproductive fitness, but a short-distance correlation to organism behavior. You don’t have to figure out that eating sugary food in the fall will lead to digesting calories that can be stored as fat to help you survive the winter so that you mate in spring to produce offspring in summer. An apple simply tastes good, and your brain just has to plot out how to get more apples off the tree.
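Here is a small sketch of that proxy arrangement, again with invented foods and numbers rather than anything from the essay: the learner sees only the immediate reward signal, which tracked fitness on the ancestral menu; add a modern food engineered for the reward signal and the learner optimizes the proxy rather than the fitness it once stood in for.

```python
# Each food has an immediate taste reward (what the brain's learner sees) and
# a long-range fitness payoff (what natural selection "cared" about). The
# numbers are invented for illustration.
MENU = {
    "apple":  {"taste": 0.6, "fitness": 0.6},
    "tuber":  {"taste": 0.3, "fitness": 0.5},
    "cookie": {"taste": 1.0, "fitness": 0.0},  # modern superstimulus
}

def pick(options):
    # The organism's learner is greedy on the reward signal alone.
    return max(options, key=lambda food: MENU[food]["taste"])

print(pick(["apple", "tuber"]))            # ancestral menu: proxy and fitness agree
print(pick(["apple", "tuber", "cookie"]))  # modern menu: proxy and fitness come apart
```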
And so organisms evolve rewards for eating, and building nests, and scaring off competitors, and helping siblings, and discovering important truths, and forming strong alliances, and arguing persuasively, and of course having sex...
When hominid brains capable of cross-domain consequential reasoning began to show up, they reasoned consequentially about how to get the existing reinforcers. It was a relatively simple hack, vastly simpler than rebuilding an “inclusive fitness maximizer” from scratch. The protein brains plotted how to acquire calories and sex, without any explicit cognitive representation of “inclusive fitness”.
A human engineer would have said, “Whoa, I’ve just invented a consequentialist! Now I can take all my previous hard-won knowledge about which behaviors improve fitness, and declare it explicitly! I can convert all this complicated reinforcement learning machinery into a simple declarative knowledge statement that ‘fatty foods and sex usually improve your inclusive fitness’. Consequential reasoning will automatically take care of the rest. Plus, it won’t have the obvious failure mode where it invents condoms!”
But then a human engineer wouldn’t have built the retina backward, either.
The blind idiot god is not a unitary purpose, but a many-splintered attention. Foxes evolve to catch rabbits, rabbits evolve to evade foxes; there are as many evolutions as species. But within each species, the blind idiot god is purely obsessed with inclusive genetic fitness. No quality is valued, not even survival, except insofar as it increases reproductive fitness. There’s no point in an organism with steel skin if it ends up having 1% less reproductive capacity.
Yet when the blind idiot god created protein computers, its monomaniacal focus on inclusive genetic fitness was not faithfully transmitted. Its optimization criterion did not successfully quine. We, the handiwork of evolution, are as alien to evolution as our Maker is alien to us. One pure utility function splintered into a thousand shards of desire.
Why? Above all, because evolution is stupid in an absolute sense. But also because the first protein computers weren’t anywhere near as general as the blind idiot god, and could only utilize short-term desires.
In the final analysis, asking why evolution didn’t build humans to maximize inclusive genetic fitness is like asking why evolution didn’t hand humans a ribosome and tell them to design their own biochemistry. Because evolution can’t refactor code that fast, that’s why. But maybe in a billion years of continued natural selection that’s exactly what would happen, if intelligence were foolish enough to allow the idiot god continued reign.
The Mote in God’s Eye by Niven and Pournelle depicts an intelligent species that stayed biological a little too long, slowly becoming truly enslaved by evolution, gradually turning into true fitness maximizers obsessed with outreproducing each other. But thankfully that’s not what happened. Not here on Earth. At least not yet.
So humans love the taste of sugar and fat, and we love our sons and daughters. We seek social status, and sex. We sing and dance and play. We learn for the love of learning.
A thousand delicious tastes, matched to ancient reinforcers that once correlated with reproductive fitness—now sought whether or not they enhance reproduction. Sex with birth control, chocolate, the music of long-dead Bach on a CD.
And when we finally learn about evolution, we think to ourselves: “Obsess all day about inclusive genetic fitness? Where’s the fun in that?”
The blind idiot god’s single monomaniacal goal splintered into a thousand shards of desire. And this is well, I think, though I’m a human who says so. Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?
Being a thousand shards of desire isn’t always fun, but at least it’s not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge—tastes that judge the blind idiot god’s monomaniacal focus, and find it aesthetically unsatisfying.
And yes, we got those very same tastes from the blind idiot’s godshatter. So what?
Godshatter? What I may or may not have shat out of my divine anus is no concern of yours.
Signed, God (big bearded guy in the sky)
Get big enough in the beyond to come down to where I live, God, I won’t send you back to the slow zone or anything ;) -Pham
Eliezer, you wrote:
“Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?”
Won’t our descendants who do have genes or code that causes them to maximize their genetic fitness come to dominate the billions of galaxies? How can there be any other stable long-term equilibrium in a universe in which many lifeforms have the ability to choose their own utility functions?
Genetic fitness refers to reproduction of individuals. The future will not have a firm concept of individuals. What is relevant is control of resources; this is independent of reproduction.
Furthermore, what we think of today as individuality, will correspond to information in the future. Reproduction will correspond to high mutual information. And high mutual information in your algorithms leads to inefficient use of resources. Therefore, evolution, and competition, will at least in this way go against the future correlate of “genetic fitness”.
Wow, too big an inferential distance, Phil. No idea what you are talking about here: “what we think of today as individuality, will correspond to information in the future.”
Would you mind giving a few more details? Curiosity striking...
I’ve been lurking for a while, and this is my first post, but:
FTFY. Instead of asking for a single detailed story, we should ask for many simple alternative stories, no?
Obviously, this doesn’t countermand your complaint about inferential distance, which I totally agree with.
Still waiting for OP to deliver...
It’s probably just something stupid like he thinks humans will upload on computers and he thinks he knows how future society-analogues will function.
This /seems/ to contain great insight that I can’t comprehend yet. Yes, please, how do I learn to see what you see?
I’m very wary of this post for being so vague and not linking to an argument, but I’ll throw my two cents in. :)
I see two ways to interpret this:
You could see it as individuals being uploaded to some giant distributed AI—individual human minds coalescing into one big super-intelligence, or being replaced by one; or
Having so many individuals that the entire idea of worrying about 1 person, when you have 100 billion people per planet per quadrant or whatever, becomes laughable.
The common thread is that “individuality” is slowly being supplanted by “information”—specifically that you, as an individual, only become so because of your unique inflows of information slowly carving out pathways in your mind, like how water randomly carves canyons over millions of years. In a giant AI, all the varying bits that make up one human from another would get crosslinked, in some immense database that would make Jorge Luis Borges blush; meanwhile, in a civilization of huge, huge populations, the value of those varying bits simply goes down, because it becomes increasingly unlikely that you’ll actually be unique enough to matter on an individual level. So, the next bottleneck in the spread of civilization becomes resources.
This is probably my first comment on this site—feel free to browbeat me if I didn’t get my point across well enough.
For everyone who hasn’t read A Fire Upon The Deep (Vinge): Godshatter is the term he uses for a superintelligence ramming data and thought patterns into a human brain.
James, that is a problem: http://www.nickbostrom.com/fut/evolution.html
And “Thou art God” comes from Stranger in a Strange Land.
The human brain also has the complicating factor of memes and what might be called “inclusive memetic fitness.” If you hypothesize that human behavior is influenced by two different sets of selfish replicators, we could certainly have an equilibrium in which natural selection doesn’t produce behavior that maximizes the fitness of only one of them. (Incidentally, does it seem to anyone else that humans are “designed” to have unwanted pregnancies?)
(Also, “The Mote In God’s Eye” isn’t necessarily the best example. The Moties aren’t specifically motivated by the desire to maximize inclusive genetic fitness, but their biology requires them to reproduce as much as possible whether their brains think they should or not. “Protector” might be a better choice, as it describes highly intelligent individuals that have maximizing the reproductive success of their offspring as their primary conscious motivation.)
Unwanted pregnancies and ‘Unwanted pregnancies’: if one cannot tell the difference, maybe it is because the difference is starting to disappear. I mean, theoretically we should tend more and more towards “oops, I forgot to take my pill today” and “Oh, don’t worry, just this one time without a condom”.
About the equilibrium between two sets of replicators: awesome as it looks, it doesn’t seem feasible from a game-theoretic point of view. We are not the product of two replicators; we are the product of two KINDS of replicators. Each replicator, gene or meme, is fighting its own fight, and will not necessarily coalesce only with its kind. They are not tribes fighting one another; I suggest this is an atypical occurrence of the Mind Projection Fallacy.
If there weren’t people who had a strong desire, not just for sex, but to actually have a child, and a willingness to go to extreme measures to do so, then sperm banks wouldn’t be a thing.
Given the number of people who specifically, and openly desire to make babies, postulating a subconscious desire that might push them to “forget” their contraception isn’t unreasonable. Especially given that cycle timing and coitus interruptus have been staples of human sexual behaviour since… Well… At least as far back as we have any records about such things. Dawn of civilization.
The two sets of replicators reminds me of an article I read about a species of birds that seems to be splitting into effectively four sexes. Male and female, but then also coloring patterns that have formed a stable loop that alternates back and forth. If the loop were unstable they’d split into two species, but it alternates generations regularly, so they keep mixing, but in a pattern of four.
Why is Eliezer so obsessed with the “high-falutin’” expression?
“Obsess all day about inclusive genetic fitness? Where’s the fun in that?” Might not our descendants evolve to consider it fun?
I agree with James Miller on the unstable equilibrium. I figure I’ll be dead by then though.
I don’t find it particularly comforting that we are made of many small shards of desire, rather than a single unified desire. We like the fact that the universe is at least locally dominated by creatures who like what we like. This will always be true, that the majority will see a world of creatures mostly like themselves.
I don’t find it particularly comforting that we are made of many small shards of desire, rather than a single unified desire.
It seems like this is a large part of what makes us eudaemonic agents, in Bostrom’s terminology. However, the most admired people are frequently those who display more of a deep commitment than usual to one or a few passions.
“Thou Art Godshatter”! Finally, a name for my Christian/Prog/Electronica combo!
We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don’t perform a check for reproductive efficacy before granting us sexual pleasure.
If we could barely arrange to have enough sex to cause 2 pregnancies per lifetime, then we would have a revulsion of condoms, oral sex, etc.
If for example we spent almost all of each year alone, and once a year men and women would meet on a sandy beach (or just offshore) and have sex, and that would be the only chance for women to get pregnant until the next year, men would compete intensely for the women who seem to be fertile, and the losers and the apparently-infertile women could console each other. When you only get one chance you don’t waste it.
But humans use sex for social signalling. A man who publicly explained that he intended never to have sex except when he was trying to cause a pregnancy, might find himself at a disadvantage among some women.
But human brains clearly can imagine these links in protein. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?
How are we supposed to tell? Like, we need food for energy and for building materials. When you don’t get enough energy you feel that. When you don’t get enough of some building material you feel that too, and you might learn to recognise that particular feeling. I’ve read about Africans who specifically get hungry for meat, who identify the specific feeling of protein deficiency. How many others are there? Each individual amino acid? Each individual vitamin? I think instead you get rewarded by the feeling you get when there’s enough of everything and no glut of something that causes problems. And it’s up to your behavioral reinforcement system to notice foods that give you that feeling.
So humans love the taste of sugar and fat, and we love our sons and daughters. We seek social status, and sex. We sing and dance and play. We learn for the love of learning.
There maybe wasn’t much chance to overdose on sugar or fat in the old days. Or salt, if you were inland. Give it a few more generations and we might do that sort of thing less. Are Pima Amerindians more susceptible to the amount of sugar we eat, or do they eat more because they can? I expect that’s been tested but I don’t know the answer myself. Maybe they haven’t been exposed to sugar for as long, so they don’t have the defenses against it we do.
You can learn strategies to play chess. Wouldn’t it be nice if evolution had provided us with a chess-fitness optimiser? Instead of thinking about strategies, you just make the right move. But that requires the problem of winning at chess to be already solved.
It might very well turn out that in our future people who successfully reproduce will tend to be people who want to, and who figure out how to. There’s some of that now, and maybe those people wind up with fitter children than the ones who become parents through carelessness. That might be hard to measure just now. Maybe the big majority of the children come from parents who give little thought to how they’ll take care of their children, and the ones who’re cautious wind up under-reproducing as a result. In the long run the evolutionary process will note what’s worked whether we manage to measure it or not.
If we could barely arrange to have enough sex to cause 2 pregnancies per lifetime, then we would have a revulsion of condoms, oral sex, etc.
If that were true in the ancestral environment, and we had access to contraception in the AE, yes. I doubt a person who now found themselves in this situation would develop this revulsion.
I’ve read about Africans who specifically get hungry for meat, who identify the specific feeling of protein deficiency.
[anecdote] Is this surprising? I’ve always been able to tell whether or not I need proteins/carbohydrates/fat (usually acting accordingly), and kind of assumed other people had the same sense. Of course, this is a fairly weak sense; calorie-rich food still tastes good when I’ve had too much, and it still takes willpower to stop eating it. [/anecdote]
TAWME, but I’m not sure if it is a consciously learned introspective behavior or something that I just picked up or developed without effort. FWIW I’ve only really noticed and acted on it for the last year or two.
What does “TAWME” mean?
“This Agrees With My Experience”.
Eliezer: poetic and informative. I like it.
@Eliezer, you are slowly changing your point of view and are on a path to rethink old thoughts. Save yourself some time and go read the Principia Cybernetica Web. Only after that will you be able to tread on new ground.
@Nick Tarleton, yes—avoiding a dystopia of non-eudaemonic agents is a challenge.
As a chicken is a way for an egg to create another egg, I would like to ‘tell my genes to jump in a lake’, as Steven Pinker puts it, but considering so many of my preferences are in sync with my genes, I have the feeling they are very good at getting me to rationalize their preferences. I don’t think there’s intrinsic meaning in anything, but when I see connections, or patterns, in music or jokes or anything, that I haven’t noticed before, I find that meaningful, pleasurable in a way my genes can’t understand. But as for my love for my kids, and the meaning it gives me, clearly the gene gremlins are at work.
You know, you’re getting repetitive. What does this post add to all the other related posts? “Evolutions” are stupid and slow. Okay. But I would guess that many people here want to know your thoughts about AI. I do.
@Tiiba, my paper on friendly AI theory should provide an answer to your question.
“I doubt a person who now found themselves in this situation would develop this revulsion.”
By about the second generation a lot would. They would mostly be descended from people who hadn’t used them. There is a minority that has a revulsion for condoms now. The idea of giving up practically your only chance to have children, deliberately, would start seeming strange when everybody in the world had parents who hadn’t done it. Cultures change faster when that happens.
“@Tiiba, my paper on friendly AI theory should provide an answer to your question.”
I don’t see any connection between my question and your answer. At least one of us is confused.
I base friendliness (universally) on the mechanism of natural selection and claim in short “that is good what increases fitness”. You can find more on my blog at http://jame5.com
You don’t understand. I’m asking ELIEZER what he is thinking. His homepage says that he has some fresh ideas about AI that are not yet published, yet he continues to write about evolution, rehashing the same idea every day. That is what I said. I don’t even know what question you’re answering.
“Being a thousand shards of desire isn’t always fun, but at least it’s not boring.”
I like that. I have a feeling Lord Gautama would have liked it too.
I will venture to say that Eliezer’s habit (this isn’t the first instance) of teasing out the same subject again and again from slightly different angles is highly illuminating for me, at least. (And, I suspect, for him as well… though that’s conjecture).
I’m a bit slower than your average Overcoming Bias lurker, it would seem from the level of discourse here. Sometimes I think I barely grasp what everyone is even talking about, though I try to read the background links people provide. But I’m an intelligent person in general, and I have an interest in the concepts and methods Robin, Eliezer and the rest hash out in this space. You could argue that all humans do, whether they realize it or not. Either Eliezer added something new to this post, or reading post after post on this has finally hammered the point through my brain. But this evening I feel like I finally get it, and by “it” I mean merely the most basic concepts.… I grasped them abstractly right away, but more important for overcoming one’s own operating biases is really getting it in a way that will allow one to spot one’s faulty reasoning in the past and the future.
@Tiiba, trust me—I am quite certain that I do, but this is not the right forum—PM me if you want to continue off this blog.
“@Eliezer, you are slowly changing your point of view and are on a path to rethink old thoughts.” I didn’t notice any evidence of that. He said that he had greatly changed his view in the past, but that was before he started blogging here. What have you seen since then that makes you think that?
@TGGP: This forum really is not the right place to get into details. It would not be fair towards Eliezer and that I posted something at all is an embarrassing revelation in regards to my intellectual vanity. Mea culpa.
Consider the Laestadians (look them up in Wikipedia if you haven’t heard of them). They tend to have lots of children; one TV program some years ago mentioned that families with 10 children are common among them.
Unless a lot of those children abandon (or at least modify) their parents’ faith, the future belongs to them and similar groups.
Religion can be a powerful fitness maximizer.
Alternatively, consider the various sects in history which have thought that the world was evil and therefore bringing children into it was doing them great harm. Needless to say, the majority of them seem to have died out...
@ Tiiba # 1: Without wishing to second-guess Eliezer, I’d suggest that his prolonged examination of the buggy, ad-hoc character of human intelligence may be intended to preface a discussion AI, its goals and methods. After all, the contrast with human intelligence could be illuminating.
That missing word: “of”.
This would explain why our formalised moral systems are either hideously complicated, or fail to capture important parts of our morality… We just have far more urges, wants, and needs than we realise.
As many comments have suggested, now that evolution has produced creatures that can consciously seek goals, and also has instilled in some of these creatures, to some extent, the goal of bearing and raising children, all that evolution needs to do is to reinforce this desire, and in time it will manage to produce a conscious fitness maximizer.
Abandoning biology is not a way to avoid this result, since biology is not the problem, but reproduction and its historical consequences. Leaving behind biology could even speed up the process dramatically.
Maybe the alien god, despite being blind, slow, and stupid, will get its way in the end. In the distant future, intelligent fitness maximizers might laugh at the ridiculous idea, now long extinct, that it is better to have a random collection of unrelated desires for no reason except historical accident, than to seek the unified goal of fitness. After all, they’ll say, obviously nothing is worth seeking except fitness. And besides, seeking anything else is self-destructive.
If this comes to pass, what would be wrong with it? If the wrongness is only from our point of view, why should our point of view have more validity than theirs? Since we have no reason to think they would find their goal boring or miserable, we have no obvious reason to be horrified at the idea of this society.
It’s wrong because our function is different. Functions are wrong or true only for other functions.
Just as an aside, fitness maximizers usually have to accept a finite population size in a finite biome with a finite carrying capacity. There’s the possible goal of expanding into the galaxy and neighboring galaxies, but in the short run we have a finite carrying capacity.
And a fitness maximizer that is too successful has to accept it needs to preserve a lot of diversity in its gene pool or else face problems that would essentially reduce carrying capacity.
A conscious fitness maximizer at some point must realise that it survives by maintaining its numbers in a diverse population, rather than maximizing the frequency of its genes.
@ Unknown: Well, one reason why our point of view is more valid than theirs is that we exist and they don’t.
In addition, it is probably worth stressing that inclusive fitness is not, strictly speaking, the goal of anything at all. Goals only make sense relative to intentions, values and so forth—the usual accoutrements of mentality. These are all things that we humans (and perhaps some other creatures) possess, but which evolution, and our genes, do not. No minds, you see. Despite appearances.
This said, there might be something to be said for engineering or breeding descendants whose drives are more harmonious than our own. For instance, they might be happier. Still, there’s no particular reason why we should choose to make inclusive fitness the goal of all their striving, as opposed to something else.
@J Thomas, the trick lies in ensuring continued co-existence.
@Stefan: I enjoyed your book and was fascinated by your FAI perspective, but your comments here could be read as overly self-promoting, which would be counterproductive. An evil, paranoid maniac might even imagine you write comments to maximize how many links to your blog you can cram onto a page! Maybe limiting the links to yourself might curb such insanity in your audience.
@Recovering irrationalist, good points, thank you—I just wanted to save time and space by linking to relevant stuff on my blog without repeating myself over and over. My apologies for overdoing it. I guess I feel like I’m talking to a wall or being deliberately ignored due to the lack of feedback. I shall curb my enthusiasm and let things take their course. You know where to find me.
[anecdote] Is this surprising? I’ve always been able to tell whether or not I need proteins/carbohydrates/fat (usually acting accordingly)....
Sorry, guys, this wisdom-of-the-body stuff hasn’t held up that well. I’ve given the link below for a lengthy but thorough account of studies that were done on rats, for the two or fewer people here who might be interested. While there is some evidence for behavioral changes based on mineral deficiency, it’s extremely complicated and the changes in the animal’s behavior are not that “accurate” (in the sense that the animal truly seeks out the depleted nutrient). Bottom line, “In many experimental situations, animals do not choose an optimal diet. This is especially the case for omnivores.” I hope this wasn’t entirely off-topic; just wanted to clean out this little rafter....
http://ajpregu.physiology.org/cgi/content/full/279/2/R357
This would explain why our formalised moral systems are either hideously complicated, or fail to capture important parts of our morality… We just have far more urges, wants, and needs than we realise.
Congratulations to Stuart Armstrong on nailing my hidden subtext.
(Albeit even the hideously complicated moral systems still don’t capture a fraction of our morality.)
@Tiiba: You seem to think I can just blurt out my AI ideas. I’ve tried that. It doesn’t work.
Having watched other AIfolk “explaining” their ideas, I know very well how to convince someone that you’ve just conveyed an AI theory—just pick a word like “complexity”, “emergence”, or “Bayesian” and call it the secret of the universe; or draw a big diagram full of connected boxes with suggestive names drawn from cognitive science. Unlike these other AIfolk, I’ve actually learned a little about how intelligence works, and so I know this would be unhelpful and dishonest.
Bayes is the secret of the universe, but believing this statement will not help you.
If you seek enlightenment upon this matter of AI, then I must ask whether you’ve read existing textbooks such as Machine Learning by Mitchell, Probabilistic Reasoning in Intelligent Systems, Artificial Intelligence: A Modern Approach (2nd Ed), and Elements of Statistical Learning. Recommended in that order.
@Unknown: I am horrified by the thought of humanity evolving into beings who have no art, have no fun, and don’t love one another. There is nothing in the universe that would likewise be horrified, but I am. Morality is subjectively objective: It feels like an unalterable objective fact that love is more important than maximizing inclusive fitness, and the one who feels this way is me. And since I know that goals, no matter how important, need minds to be goals in, I know that morality will never be anything other than subjectively objective.
With all that said, I hope you won’t mind if I use objective language to say:
“Evolving into obsessive replicators would be a waste of humanity’s potential. They might not mind, just as sociopaths don’t mind killing, but I mind. I will avoid such a future with every power of my intelligence.”
Brendon, you can’t expect a learning system to quickly get an exact solution to a problem in N simultaneous equations. But when improvements result in a sense of well-being, they might tend to gradually zero in on solutions. So for nutrition you need sufficient energy and your body might have pre-programmed goals for repair and growth, and whatever helps meet those targets could provide that sense of well-being that announces something worked.
Simpler than having thousands of individual goals programmed in.
“Being a thousand shards of desire isn’t always fun, but at least it’s not boring.”
I like that. I have a feeling Lord Gautama would have liked it too.
I always thought the exact opposite, that Lord Gautama had a profound experience that made him relatively indifferent to the thousand shards. Specifically, a full-blown ecstatic or mystical experience is a million times more pleasurable than any other experience the mystic has had or will have, which I always thought would make one less attached to ordinary pleasures and ordinary reinforcers. Once a religion becomes a popular movement or part of the ruling class’s justification, its leaders are tempted to modify it to broaden its appeal, which is how I always thought Buddhism acquired the habit of promising an end to suffering. One of my friends had a profound mystical experience, personally attests to the “a million times more pleasurable”, is very scrupulous and truthful, and derives no reputational benefit from having had the experience. (He says that I am practically the only one with whom he has ever discussed his experience in any detail.)
Moreover, it is my hypothesis that being indifferent to the thousand shards is a powerful enhancer of mental and moral clarity in the right conditions. One of these conditions is that the indifference be not so total or so early in onset that it extinguishes curiosity during the person’s youth, which of course is just totally pernicious in an environment as rich in true scientific information as our environment is. Another adherent of this hypothesis is academician John Stewart. Mystical experience is quite risky and dangerous; competent supervision is recommended and I would suppose is available at low or no cost to students who show high potential. Another common adverse outcome of mystical experience seems to be to make the person more confident in his beliefs, especially about the moral and political environment, and of course people tend to be too confident about their beliefs already.
if a moderator has time, could he replace my http://users.tpg.com.au/users/jes999/ with http://users.tpg.com.au/users/jes999/EvSpirit.htm
J Thomas : “So for nutrition you need sufficient energy and your body might have pre-programmed goals for repair and growth, and whatever helps meet those targets could provide that sense of well-being that announces something worked.”
This sort of system might work for thirst or even carbs and protein, but would be pretty bad at things like getting you to eat balanced amounts of vitamins and minerals. For instance, your diet could be vitamin B12 poor for months or maybe longer before you would feel the pinch (your body stores the vitamin pretty well, I’m told), and I doubt you would then start ‘craving’ vitamin B12-rich foods—particularly because there had never been an appetite-satiety relationship you could have picked up on, even unconsciously.
Some new info re: evolution you might want to consider before taking the gene view of evolution to its logical conclusions:
http://www.springerlink.com/content/qh67113u60887314/ “Although we agree that evolutionary theory is not undergoing a Kuhnian revolution, the incorporation of new data and ideas about hereditary variation, and about the role of development in generating it, is leading to a version of Darwinism that is very different from the gene-centred one that dominated evolutionary thinking in the second half of the twentieth century.”
http://www.sciencedaily.com/releases/2003/09/030929054959.htm how new thinking applies to societies
Is not your second link dealt with by http://lesswrong.com/lw/iv/the_futility_of_emergence/ or am I misreading one of the two? It seems to leave the main causal mechanism abstract enough to prove anything.
That still doesn’t explain why Eliezer has been using the expression “high-falutin’” so much. Is it from some recently read book, perhaps?
Brendon, I find your reasoning plausible. I don’t know how true it is. I don’t want to give myself pernicious anemia to test it, so I’ll settle for saying it looks plausible.
If you have a vitamin deficiency, and you get a dose of the vitamin that makes you somewhat less deficient, will you feel better within a few hours? If so then it might be reinforced. On the other hand, one single experience of nerve poisoning a few hours after eating a particular new food can be enough to establish a lifelong distaste for that food.
This seems unlikely—it’s far more probable that mystical experiences are highly satisfying rather than so intensely pleasurable.
Richard:
I suppose this counts as threadjacking, but this thread seems about played out, so I’ll respond to your response to my off-topic aside.
I’m interested in what you say. I don’t think it’s necessarily off base. But my little cheeky comment was in reference to the Buddhist concept of anatta, or non-self. That is, Eliezer’s insistence that there is no purposeful unifying force behind what we experience as “our” desires reminded me of an analogous teaching of the Buddha. Evolution can be seen as a unifying force, I suppose, since it is the common wellspring of our desires, but as Eliezer is rightly at pains to point out, it is decidedly not purposeful. “A thousand shards of desire” is what we are left with.
One of the key concepts of Buddhist meditation and scholarship is that desires are ultimately independent of the desirer. [Note: I differentiate serious, classical Buddhism, which has a ridiculously large set of founding texts and canonical commentaries, from pop Buddhism. or the selective Western brand of Buddhism which takes the concepts that have appeal for people brought up in a society where the dominant religious traditions are monotheistic and authoritarian (the West, that is) while leaving behind the less sexy teachings which are in fact the core of the practice.] In the first stages of serious meditation, before you achieve any mystical bliss or whatnot, it becomes quite clear that the thoughts and desires that we take for granted as “our own” are in fact caused by specific conditions and fall away when those conditions cease. That’s the practice-based observation. The theoretical concept that springs from that is that, in fact, we build our mistaken sense of a unified “I” out of these falsely-apprehended experiences. (I say theoretical because my personal inquiries have not yet fully borne this out… perhaps they will, perhaps not… there are Buddhist scholars and monks who claim to know this to be ontologically true… I have reasons to doubt them, but I also have reasons to believe them… further inquiry is required).
Of course this teaching comes from a time before any understanding of evolutionary theory, and is practiced today by people who, broadly speaking, still don’t have any real understanding of such (yours truly included!). I don’t want to throw around too much sloppy thinking here, but I will suggest that there may be more than one angle at which to come to an understanding. Both disciplined scientific inquiry and disciplined meditational inquiry are (properly) undertaken with a desire to get at an understanding of reality while systematically eliminating misapprehensions and biases as they arise.
Anyway, all that is not to refute what you said, but to explain my comment.
I will take issue with your positing that the teachings on the end of suffering were added by later theocrats or rulers who wanted to broaden its appeal for the masses. In the oldest texts we have (written down around 2200 BC, after 300 or so years surviving in an oral tradition the fidelity of which has been shown in other contexts to be remarkable), the Buddha teaches again and again about suffering. In several places in the sutras he is quoted as saying, "I teach one thing: suffering and its end." The teaching on the Four Noble Truths (said to be the first teaching he ever gave, though admittedly that's pretty hard to ascertain for sure) is the central teaching of the Buddhist canon. Many, many, many of the Buddha's teachings came in for debate, abandonment and wholesale distortion as they spread to various different societies with their own cultural norms and mores and institutions and languages. But the teachings on suffering and its end are the same in Tibet as they are in Sri Lanka as they are in Japan. You might argue that the original teaching was somehow a cynical appeal to the masses (I am very much inclined to say it was not), but it's clearly not a later corruption.
I'm very interested in parallels between the kind of ruthlessly rational inquiry displayed by the thinkers on this blog and that displayed by the early Buddhists, including the Buddha himself. I find myself looking for ways to reconcile the two. Of course, in even admitting that, I'm busting myself! If I have my desired conclusion in mind as I sift through the evidence, I have already forgotten the central teachings of Overcoming Bias! … I'll press on though, catching myself where I can! ;)
Nit: surely you mean “220 BC,” not “2200 BC”.
"I will take issue with your positing that the teachings on the end of suffering were added by later theocrats or rulers who wanted to broaden its appeal for the masses."
I stand corrected. Thank you for your thoughtful reply.
"I find myself looking for ways to reconcile the two. Of course, in even admitting that, I'm busting myself! If I have my desired conclusion in mind as I sift through the evidence, I have already forgotten the central teachings of Overcoming Bias!"
Hmm. I wonder whether in ordinary cases it is okay to construct tentative models of reality at a profligate pace, provided one remains sufficiently eager to revise and discard. I'm pretty sure that I derive pleasure when one of the tentative models I have constructed is destroyed by a counterexample or counter-evidence (and that this pleasure is caused by the same mechanism that causes the pleasure I get when I learn a new fact), and that that pleasure outweighs the pleasure I derive from feeling certain that I am right. In particular, I hypothesize that my early experience desensitized me to doubt, including doubt about my own morality—feelings that most people who did not have my experience seem to find quite aversive.
I believe that our environment is “awash in evidence” in that most hypotheses we need to entertain to lead a very effective and very ethical life have the property that if a person ignores evidence for the hypothesis, the only thing he sacrifices is time because the mere passage of time will bring more evidence for the hypothesis. Now of course I recognize exceptions to this general observation. I am willing to believe for example that in competitive situations like military combat or wheeling and dealing in business or simply in buying and selling, the person who pays closer attention to scarce evidence can have a decisive advantage. (Hmm: these situations also seem to share the property that denying the opponent information about one’s situation is often decisive.) But in the main it remains true IMO.
In summary, the worst cognitive biases seem to me to be those in which the person is actively motivated by the human reward circuitry to ignore certain classes of evidence in a consistent manner. I propose that in comparison, merely ignoring most but not all evidence on some point and profligately building causal models on scant evidence are minor sins. Consequently, I advise paying close attention to one’s emotional responses around belief formation and belief rejection.
Since that proposition seems to contradict a point Eliezer has made several times, I will counter the possibility that I will be misunderstood by saying that I agree with him at least 98% of the time and have personally learned far more from his writings than I have from any other author since 2001, when I discovered his writings. How much to trust or to give our loyalty to our emotions might be the biggest place he and I disagree, with my maintaining that it is critical for a person who aspires to be a culture leader to ignore, as much as is practical, species-typical emotional associations when choosing one's beliefs and terminal values.
I advise a young person who wishes to become a mature adult who is not an arrant slave to species-typical cognitive biases to pay copious attention to which thoughts and beliefs cause him pleasure and which cause discomfort. I suggest that over the long term, if a person begins the project while still a teenager, he has quite a bit of control over his emotional responses—he can, for example, probably cause himself to become an adult who takes great pleasure in learning new scientific information.
Two hints on that one. First, being rewarded (with e.g. money or grades) for learning will tend to extinguish the "intrinsic" motivation to learn which is so valuable. So if you must undergo the formal educational system, be as indifferent to grades as practical. Second, the pleasure to be derived from learning or from exercising scientific or technical creativity is minor compared to the pleasure a teenager can derive from success in the popularity game that high school is famous for, from sex, and perhaps from dominating opponents on the athletic field. If you can manage to derive most of your pleasure from learning during the critical age from about 14 to 17 -- by making a point not to develop the habit of getting your pleasure from the three more powerful reinforcers I just mentioned -- then you will have gone a long way to setting yourself up for "good emotional responses" throughout your adulthood. (Before the age of 14, most people will not have sufficient executive skills to engage in such a program of "emotional shaping", but if you think you do have the skills or if you have adults you trust helping you, I say go for it.)
Buddhist pursuits of the type Humphries engages in seem to be a fine aid to becoming a relatively unbiased adult, particularly what the Buddhists have to say about cultivating an observing self.
Let me counter the possibility I will be misunderstood by saying that I have no practical experience educating young people except what I have learned from observing myself and listening to the recollections of a handful of friends. Still so much of what I read about pedagogy strikes me as misguided that I chose to speak out.
I am threadjacking of course, but I consider it not worth the costs to try to keep the conversation in neat little boxes, especially once a thread has aged for a few days. I'll of course defer to the judgement of the original poster and the owner of the blog.
Humphries and Hollerith, your comments would be too long even if they were on-topic. However you can resubmit the comments to an Open Thread, after which they will be deleted here. Thank you.
If they’re too long for this page, I suggest that they’re too long for an Open Thread, too. I have copied Humphries’ latest and my two comments to my web site and emailed Humphries with a notification of what I did (followed by an offer to delete his words from my site if that is his preference).
Deep Blue has many desires too. It knows that a knight is three times as desirable as a pawn—unless the pawn is well advanced. It knows about the value of the centre, and the importance of quiescence—and so on.
The important point to realise is that these desires all represent imperfections. They are not useful features to be retained and deliberately implemented in future designs, but rather simple heuristics intended to deal with hardware and software limitations—and in the future their preservation may well lead to mistakes, errors, and losses.
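To make the "desires as heuristics" point concrete, here is a minimal sketch in Python of the kind of static evaluation being described. It is emphatically not Deep Blue's actual code; the piece values, the centre bonus, and the board representation are illustrative assumptions only.

```python
# A toy static evaluator: fixed "desires" (material, centre control) standing in
# for the real goal of winning, because searching to the end of the game is too
# costly. None of these weights are terminal values; they are stand-ins.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}
CENTRE_SQUARES = {(3, 3), (3, 4), (4, 3), (4, 4)}  # the four central squares
CENTRE_BONUS = 0.1  # illustrative, not any real engine's tuning

def evaluate(board):
    """Score a position from White's point of view.

    `board` is assumed to be an iterable of (piece, colour, square) tuples,
    e.g. ("knight", "white", (3, 4)).
    """
    score = 0.0
    for piece, colour, square in board:
        sign = 1 if colour == "white" else -1
        score += sign * PIECE_VALUES.get(piece, 0)
        if square in CENTRE_SQUARES:
            score += sign * CENTRE_BONUS
    return score

# Example: a White knight on a central square versus a Black pawn on the rim.
print(evaluate([("knight", "white", (3, 4)), ("pawn", "black", (0, 6))]))  # about 2.1
```

The analogy is that such weights are kept only because the system cannot afford to compute the thing it actually cares about; they compensate for limited hardware rather than being features worth preserving for their own sake.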
Also: nature gave us brains. Brains help organisms deal with a varying environment. Part of the purpose of the brain, I claim, is to reassemble your desires into evolution's intended "target"—a.k.a. making grandchildren. The target itself cannot be built in directly—because of the limited space in the genome, and because of the varying nature of the environment.
The brain does this reassembly successfully in some individuals. They realise that high calorie foods are not good for them—because their environment is not the one their ancestors evolved in. They wake up to the idea that advertisers are trying to play the imperfections of their mind like an organ, for their own ends, and make efforts to compensate. And they understand the consequences of the use of contraceptive devices. The in-built desires are subjugated in favour of higher level goals. If your brain has not managed such a reconstruction, you may want to consider the hypothesis that it is broken or malfunctioning—and thus to wonder if there is anything you can do to fix it.
This is an interesting and important paragraph, and it explains some things about Eliezer's views. It's important enough to deserve justification. But I don't see evidence for the idea that evolution gets more oppressive as time passes. Is this a trend in historical data? No; organisms acquire more degrees of freedom as they become more complex.
The unspoken assumption is that organisms continue to evolve, yet don't increase in complexity—imagining humans to continue to evolve, yet without passing beyond the human stage. Perhaps Moties would be the result. We see here in the US that evolution very rapidly rewards cultures and religions that forbid birth control and/or encourage large families.
On the other hand, these cultures’ and religions’ growth in number of humans does not result in an equal growth in money and power and control of resources.
I don’t have an answer; but this idea, almost skipped over, that evolution will inevitably lead to bad things, is a powerful motivator of FAI, CEV, and all such take-over-the-universe schemes. So it needs much more explication than a one-sentence reference to a Niven novel.
The question is indeed interesting, but the presumed answer is a powerful motivator for whom? Even if human evolution will lead to a super-amazing future of greatness, I doubt that future would be as super-amazing as a correctly implemented FAI; avoiding dystopian evolutionary existential catastrophes has never been listed as a main reason for wanting to build a friendly, really powerful optimization process by anyone I've talked to. Most don't think humanity will even get that far.
But I’m curious as to what your intuitions are regarding the probably counterfactual world where humans continue evolving for a long, long time.
Eliezer has a bias against evolution, and a bias against randomness, as exhibited in his series ending in Worse than Random, which is factually correct in the details, but misleading in the real world, as demonstrated by repeated times when his acolytes have used it to attack probabilistic search, probabilistic models, etc.
My take all along has been that something about evolution has caused it to reliably make the world a more complicated, more interesting, and better place; and evolution, with randomness, is the only process that can be trusted to continue this. Any attempt to control and direct the course of change will just lock in the values of the controller.
I see E's story about the Moties as being one possible source of his bias against evolution, and hence against randomness.
Exactly. This should obviously be what we need to do.
.....
Evolution is blindly optimizing for those that produce more offspring. Eventually, those specifically aiming for this would do it more effectively than those who didn't, meaning that eventually only those whose main goal is to mate would dominate. Evolution marches on.
Why this has not happened before is related to the fact that there have not been human-level, scheming animals on this planet until now. Animals that can't plan years ahead would benefit very little from having an urge towards fitness maximizing. Adaptations to be executed are what need to be optimized, and they are what matter vastly more at that level.
My assumption is that it isn’t really possible to take charge of evolution. You might be able to have less undirected biological evolution, but only by having memetically-driven evolution. Things are still going to have random influences.
<3
?
I'm mostly in agreement with this, but feel I must point out that from the perspective of social primate evolution the "sex only when it will result in offspring" paradigm is a perversion invented (or at least reinvented) by modern humans. Sex is primarily a bonding mechanism, as evidenced by the fact that sexual desire is mediated as much by social circumstances as by other considerations. Of course, social standing is ultimately directed at improving genetic fitness, but sex has been repurposed by the primate social system so that, essentially, it improves fitness in two ways rather than just the one you seem to be seeing. Given this, and the fact that the important number is (as some evolutionary biologists have pointed out) not the number of children born but the number of one's children who themselves reproduce, you have a perfectly good reason why humans in every place and time have been trying like hell to invent reliable birth control, for those numerous times when the "social bonding" part is desired but not the "potentially getting pregnant" part.
I agree with the thrust here, but it does seem that you’re conflating two different distinctions.
More specifically: you contrast explicit cognitive representations with implicit genetic representations (1), and it’s not always clear when you are talking about the distinction between implicit and explicit representations, and when you are talking about the difference between cognitive and genetic ones.
And it seems to matter: if I ask why my genetic representations aren’t recapitulated as cognitive ones, the kind of answer you give here is a fine one, but if I ask why my implicit representations aren’t recapitulated as explicit ones, that answer is insufficient. I am ignorant not only of what “my genes want,” but also of much of what “my brain wants,” and the stubborn implicitness of that second kind of information is not proximally due to evolution’s inability to quickly refactor code.
I don’t think any of that actually alters your main point, which is primarily about genetic vs. cognitive representations. Still, it’s worth emphasizing that not all cognitive representations are explicit ones, and there are good reasons for that over and above the genetic “godshatter” effect.
(1) I’m using “representation” here in a very loose way, admittedly.
This is possibly the best creation myth I’ve ever read. Possibly because unlike other creation myths, this one is actually true.
You’ve found amazing poetry in this grand cycle of gene warfare. But now I must wonder: How self-contained are all these desires? Will we evolve some of them to extinction? It is very hard, and somewhat disconcerting, to think of what today is human as only a passing phase on an endless continuum. Yet to assume humanity would always remain as it is seems both unrealistic, and unsatisfying—we want to see growth and novelty. So I guess I hope we will become more complex, more interesting… Rather than get narrowed toward a less fragmented sense of purpose.
It just seems that evolution has failed to build a Friendly (to evolution) AI.
Why not become a pure reproductive consequentialist?
Reading these posts I notice a preference for altruism, for utilitarianism, and for rejecting some of the intuitions that natural selection gave us. Moreover, almost everyone working on evolutionary psychology takes great pains to avoid the naturalistic fallacy: not confusing what is with what ought to be (see Richard Dawkins—"The Selfish Gene"—or Steven Pinker—"The Blank Slate").
Still, I am wondering: what is so "good" about altruism? We know that our preference for altruism also developed by natural selection, because it either benefits our genes in other humans (W. D. Hamilton, kin altruism) or leads to reciprocal benefits for ourselves (Robert Trivers, reciprocal altruism), or at least it did in the small hunter-gatherer tribes of the ancestral environment. Utilitarianism is now projecting this altruism that we naturally feel towards friends and family (which was good for our genes) onto humanity as a whole (which probably isn't). Usually there is the assumption that every human life is worth the same.
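For concreteness, the kin-selection logic here can be written as a single inequality (this is just the textbook statement of Hamilton's rule, nothing specific to this thread):

$$ rB > C $$

where r is the genetic relatedness between actor and recipient (about 1/2 for a full sibling, as the post notes), B is the reproductive benefit to the recipient, and C is the reproductive cost to the actor. A gene promoting the altruistic act can spread when the inequality holds, which is why altruism towards kin "was good for our genes" while indiscriminate altruism towards all of humanity generally was not.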
I agree that you can’t take your values from evolution, but why assume that there are any (objective) values at all? Why not embrace nihilism? Why not become a pure reproductive consequentialist?
Some practical consequences of this value system (some of them pretty weird):
Valuing your family, esp. your children more than strangers (anybody does that intuitively anyway)
Valuing your friends more than strangers (because they reciprocate; people also naturally do this)
Sacrificing your life for your children if that improves their survival more than it reduces your chance for future surviving children (you also see this very often in the real world, fathers drowning to save their children, etc.; Of course you could do the math much better than our adaptation which basically says “save drowning children”)
Switching to full altruism after your chance (or plan) to have future children has fallen to zero. Of course, still caring more about your relatives than about others (grandparents do this a lot; Bill Gates would also be an example)
Ignoring your will to have sex unless you plan to have children or it becomes distracting and reduces your ability to achieve your other goals.
Going to the sperm bank (spreading your genes and getting paid for it. That’s what I call a win-win situation.)
Avoiding fatty and sugary food, following the paleo diet. (to improve your direct fitness and sexual attractiveness)
Not having any higher moral values whatsoever. Following your moral intuitions only when they are useful to other goals.
Basically acting like Gordon Gekko from Wall Street. Only that you would try to turn your money and power into a lot of children, likely from different women. (Like the Aztec or Inca emperors who had thousands of women. Unfortunately for the inclusive fitness of today's powerful men this has become nearly impossible. It's better for the average man, I guess.)
I am not planning to act out this slightly silly idea in my life. Still I am astonished how well it approximates what people actually do considering the change from our ancestral environment. I would like to hear your thoughts.
I was heavily thinking about this topic in the past few weeks before stumbling across this post and your comment, and I appreciate both.
Ultimately, I agree with your conclusion. What’s more, I think this (becoming a pure reproductive consequentialist) is also inevitable from the evolutionary standpoint.
It’s already clear that pure hedonistic societies (“shards of desire” et al) are on a massive decline. The collective West, with an average US fertility rate of something like 1.6 per woman, is going to die off quickly.
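As a rough back-of-the-envelope illustration (assuming a replacement rate of about 2.1 children per woman and holding everything else fixed: no migration, no change in mortality or in generation length), a sustained fertility rate of 1.6 shrinks each generation by a factor of roughly

$$ \frac{1.6}{2.1} \approx 0.76, \qquad 0.76^{5} \approx 0.25, $$

so after about five generations the descendant population would be down to roughly a quarter of its starting size. "Quickly" on historical timescales, if slowly in terms of an individual life.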
But the gap will be filled, and it will be filled with the programming that re-enables higher reproductive fitness.
My take, though, is that you don’t have to be radical about either of those strategies. You don’t have to maximize your fertility to the absolute best by sacrificing all joy. I think you just have to maximize it to some reasonable subjective degree. Arguably, having fun should have a positive impact on your gene propagation — as long as you efficiently propagate!
So my personal choice is to follow all the strategies from your comment and some more — except the ones that are not fun. And treat the rest of the activities (fun but pointless) as inevitable cost of slow evolution, but not blame myself for this since this is not really my fault.
This excludes sperm banks but includes maximizing offspring by various other joyous ways.
This poses some interesting challenges though. Even if you brute-force the problem of limited resources to pass on to your offspring, you still have the challenge of limited bonding opportunities with the mothers, which may be detrimental to the children and hurt their own reproduction (which is critical, as also mentioned in the comments).
I wonder what is the optimal number of human offspring for one male, given that at some higher numbers, further increase seems to be detrimental to the sum of group fitness.
14 years later, I notice that Eliezer missed the other reason why evolution didn't design organisms that have fitness maximization as an explicit motivation. It's not just that it can't plan well enough to get there; it's also that such a motivation would have a disadvantage compared to a set of heuristics: higher computational cost. A hypothetical mind only concerned with fitness maximization would probably have to rediscover a bunch of heuristics like "excessive pain is bad" to survive in practice. (At that point, it would indeed have an advantage in that it could avoid many of the failure modes of heuristics.)
Sort of covered here (“along with”):
Reading the post I didn’t understand this:
Could evolution really build a consequentialist? The post itself kind of contradicts that.
Could a consequentialist really foresee all consequences without having any drives (such as curiosity)?
I think your critique about computational complexity is related to the first point.
I would submit that most other species on the planet, were they to rise to our level of intelligence, would not bother inventing condoms. In most other species, the females generally have no particular interest in sex unless they want babies.
Humans though, are weird. Because of our long phase of immaturity, and the massive amount of work involved in raising a child, we need really strong social bonds. Evolution, being a big fan of "the first thing I stumble across that gets the job done is the solution", repurposed sex into a pair-bonding trigger, and then, as our ancestors' offspring required longer and longer care, divorced it from any specific attempt to make a baby at that particular moment.
Now fast forward to the point where infant mortality drops and churning out babies as fast as possible is no longer the best strategy. But we still need the pair bonding because the length of childhood hasn’t gotten any shorter, and it still goes way better with two sets of hands to look after the little one. Evolution would probably come up with another quick hack for this… (One might suggest that it already has in the form of oral sex.) But it will take a while. Our brains are faster.
Evolution now will simply need to favor genetics that introduce an explicit desire for children, rather than the other behaviours which used to inevitably lead to them. Which… There are a lot of people out there for whom not wanting children is a dealbreaker when looking for a potential spouse. So it seems like it’s already on top of that one too.