Adaptation-Executers, not Fitness-Maximizers
“Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.”
—John Tooby and Leda Cosmides, The Psychological Foundations of Culture.
Fifty thousand years ago, the taste buds of Homo sapiens directed their bearers to the scarcest, most critical food resources—sugar and fat. Calories, in a word. Today, the context of a taste bud’s function has changed, but the taste buds themselves have not. Calories, far from being scarce (in First World countries), are actively harmful. Micronutrients that were reliably abundant in leaves and nuts are absent from bread, but our taste buds don’t complain. A scoop of ice cream is a superstimulus, containing more sugar, fat, and salt than anything in the ancestral environment.
No human being with the deliberate goal of maximizing their alleles’ inclusive genetic fitness would ever eat a cookie unless they were starving. But individual organisms are best thought of as adaptation-executers, not fitness-maximizers.
A toaster, though its designer intended it to make toast, does not bear within it the intelligence of the designer—it won’t automatically redesign and reshape itself if you try to cram in an entire loaf of bread. A Phillips-head screwdriver won’t reconform itself to a flat-head screw. We created these tools, but they exist independently of us, and they continue independently of us.
The atoms of a screwdriver don’t have tiny little XML tags inside describing their “objective” purpose. The designer had something in mind, yes, but that’s not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, “The purpose of the screwdriver is to drive screws”—as though this were an explicit property of the screwdriver itself, rather than a property of the designer’s state of mind. You might be surprised that the screwdriver didn’t reconfigure itself to the flat-head screw, since, after all, the screwdriver’s purpose is to turn screws.
The cause of the screwdriver’s existence is the designer’s mind, which imagined an imaginary screw, and imagined an imaginary handle turning. The actual operation of the screwdriver, its actual fit to an actual screw head, cannot be the objective cause of the screwdriver’s existence: The future cannot cause the past. But the designer’s brain, as an actually existent thing within the past, can indeed be the cause of the screwdriver.
The consequence of the screwdriver’s existence may not correspond to the imaginary consequences in the designer’s mind. The screwdriver blade could slip and cut the user’s hand.
And the meaning of the screwdriver—why, that’s something that exists in the mind of a user, not in tiny little labels on screwdriver atoms. The designer may intend it to turn screws. A murderer may buy it to use as a weapon. And then accidentally drop it, to be picked up by a child, who uses it as a chisel.
So the screwdriver’s cause, and its shape, and its consequence, and its various meanings, are all different things; and only one of these things is found within the screwdriver itself.
Where do taste buds come from? Not from an intelligent designer visualizing their consequences, but from a frozen history of ancestry: Adam liked sugar and ate an apple and reproduced, Barbara liked sugar and ate an apple and reproduced, Charlie liked sugar and ate an apple and reproduced, and 2763 generations later, the allele became fixed in the population. For convenience of thought, we sometimes compress this giant history and say: “Evolution did it.” But it’s not a quick, local event like a human designer visualizing a screwdriver. This is the objective cause of a taste bud.
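The compressed history in that paragraph can be unpacked with a toy simulation. The sketch below is a minimal haploid Wright-Fisher model of a “prefers sugar” variant under weak positive selection; the population size, selection strength, and starting frequency are invented for illustration, not estimates of the actual ancestral numbers:

```python
# Toy haploid Wright-Fisher simulation: a slightly advantageous allele either
# drifts to loss or climbs to fixation over many generations. All parameters
# here are hypothetical, chosen only to illustrate the cumulative process.
import random

def run_until_fixed_or_lost(pop_size=1000, selection=0.01, start_freq=0.01, seed=0):
    """Return (generations elapsed, final frequency: 1.0 means fixed, 0.0 means lost)."""
    rng = random.Random(seed)
    freq = start_freq
    generation = 0
    while 0.0 < freq < 1.0:
        # Selection: carriers reproduce slightly more often on average.
        expected = freq * (1.0 + selection) / (freq * (1.0 + selection) + (1.0 - freq))
        # Drift: the next generation is a finite random sample.
        carriers = sum(1 for _ in range(pop_size) if rng.random() < expected)
        freq = carriers / pop_size
        generation += 1
    return generation, freq

for seed in range(5):
    gens, final = run_until_fixed_or_lost(seed=seed)
    print(f"run {seed}: allele {'fixed' if final == 1.0 else 'lost'} after {gens} generations")
```

Most runs lose the rare variant to drift; when fixation does happen, it is the summed outcome of thousands of generations of slightly-more-often-reproducing sugar-likers, not any single quick event.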
What is the objective shape of a taste bud? Technically, it’s a molecular sensor connected to reinforcement circuitry. This adds another level of indirection, because the taste bud isn’t directly acquiring food. It’s influencing the organism’s mind, making the organism want to eat foods that are similar to the food just eaten.
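To make that indirection concrete, here is a minimal sketch of the kind of reinforcement loop being described; the foods, taste signals, and learning rate are hypothetical, chosen only to show how a taste signal shapes what the organism later wants rather than fetching food itself:

```python
# Minimal sketch of the indirection: the taste signal acts as a reward that
# nudges the organism's learned preferences toward similar foods.
# Foods, taste signals, and the learning rate are hypothetical.

def reinforce(preferences, food, taste_signal, learning_rate=0.2):
    """Move the learned preference for a food toward the taste signal it produced."""
    old = preferences.get(food, 0.0)
    preferences[food] = old + learning_rate * (taste_signal - old)

# Hypothetical taste signals for foods in an ancestral diet.
taste_signal = {"apple": 0.8, "roast rabbit": 0.9, "leaves": 0.1}

preferences = {}
for meal in ["apple", "apple", "roast rabbit", "leaves"]:
    reinforce(preferences, meal, taste_signal[meal])

# The organism now most "wants" whatever it was most reinforced on.
print(max(preferences, key=preferences.get))  # -> 'apple'
```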
What is the objective consequence of a taste bud? In a modern First World human, it plays out in multiple chains of causality: from the desire to eat more chocolate, to the plan to eat more chocolate, to eating chocolate, to getting fat, to getting fewer dates, to reproducing less successfully. This consequence is directly opposite the key regularity in the long chain of ancestral successes which caused the taste bud’s shape. But, since overeating has only recently become a problem, no significant evolution (compressed regularity of ancestry) has further influenced the taste bud’s shape.
What is the meaning of eating chocolate? That’s between you and your moral philosophy. Personally, I think chocolate tastes good, but I wish it were less harmful; acceptable solutions would include redesigning the chocolate or redesigning my biochemistry.
Smushing several of the concepts together, you could sort-of-say, “Modern humans do today what would have propagated our genes in a hunter-gatherer society, whether or not it helps our genes in a modern society.” But this still isn’t quite right, because we’re not actually asking ourselves which behaviors would maximize our ancestors’ inclusive fitness. And many of our activities today have no ancestral analogue. In the hunter-gatherer society there wasn’t any such thing as chocolate.
So it’s better to view our taste buds as an adaptation fitted to ancestral conditions that included near-starvation and apples and roast rabbit, which modern humans execute in a new context that includes cheap chocolate and constant bombardment by advertisements.
Therefore it is said: Individual organisms are best thought of as adaptation-executers, not fitness-maximizers.
Would this also explain why the use of birth control is so popular?
“What is the meaning of eating chocolate? That’s between you and your moral philosophy. Personally, I think chocolate tastes good, but I wish it were less harmful; acceptable solutions would include redesigning the chocolate or redesigning my biochemistry”
Indulging in sumptuous, wonderful cacao products—this is the meaning of eating chocolate—and the flavonoids come as a free gift. Accepting chocolate as a treat, allowing us to be sort of hedonistic, is far better than a Calvinist ‘adaptation method’ of redesigning either the chocolate or our biochemistry until everything fits a theory that might turn out to be wrong anyway. Who can guarantee that chocolate won’t become a superfood in the near future?
The atoms of a screwdriver don’t have tiny little XML tags inside describing their “objective” purpose.
Not yet, but those atoms probably will be tagged in XML with the designer’s intent fairly soon. Also the user manual, credits, bill of materials and sourcing, recycling instructions, links to user groups and issue repositories, etc., etc. It obviously doesn’t change your argument, but I do wonder how our cognitive biases will be affected when everything is tagged with intent and history, crosslinked and searchable. I guess we’ll find out soon enough.
A long time ago I read a newspaper article which claimed that a Harvard psychological research project had women chew up chocolate and spit it out, while looking in a mirror and connected to some sort of electrodes. They claimed that after that the women didn’t like chocolate much.
I tried it without the electrodes. I got a 2 pound bag of M&Ms. I usually didn’t buy M&Ms because no matter how many I got they’d be gone in a couple of days. I started chewing them and spitting them out. Every now and then I’d rinse out my mouth with water and the flavor would be much more intense after that. I got all the wonderful taste of the M&Ms but I didn’t swallow.
I did that for 15 minutes a day for 3 days. After that I didn’t much like chocolate, and it took more than a year before I gradually started eating it again.
I think the esthetic pleasure of chocolate must have a strong digestive component.
Another possibility is that there’s something about chewing things and spitting them out that tends to make them less appealing. (E.g., the whole thing looks and feels kinda gross; or you associate spitting things out with finding them unpleasant—normally if you spit something out after starting to eat it it’s because it tastes unpleasant or contains unpleasant gristle or something like that.)
Most of our taste buds are actually in the part of the tongue that food only reaches after swallowing.
I’d hazard a guess that this is also where most of the positive reinforcement circuitry eventually happens, but that might be inferring too much based on what I know. I wish I had a psychoanatomy textbook handy. It might also be that the negative reinforcement circuitry happens mostly on the pre-swallow taste buds, which would handily explain your temporary aversion to chocolate -and- the “taste test” phenomenon wherein humans taste something once and, prior to swallowing, proclaim a permanent dislike of that flavor.
A caution: anyone who reads this comment should not take either J_Thomas’s hypothesis or mine as actual evidence. I provided one to illustrate just how reasonable the exact opposite of what he said sounded, i.e., that nothing about digestion provides reinforcement.
Seth Roberts’ diet was really about this insight.
https://en.wikipedia.org/wiki/The_Shangri-La_Diet
Who can guarantee that chocolate won’t become a superfood in the near future?
So redesign the human taste system to measure how much of each nutrient you have and how much you need, including micronutrients formerly reliably common in the ancestral environment, and macronutrients formerly reliably scarce. Then it will function fine even after civilization collapses. Evolutions are stupid.
I think the esthetic pleasure of chocolate must have a strong digestive component.
Seth Roberts would agree with you. I don’t think he’s written about that particular experiment, but it confirms his basic argument on flavor-calorie association.
This is the distinction Daniel Dennett makes between the intentional stance and the design stance. I consider it a useful one. He also distinguishes the physical stance, which you touch on.
It turns out that much chocolate is produced with exploited child slave labor (there is also a more business-friendly article on the subject). That is a new meaning of eating chocolate, very sad for me since I love the stuff. I’m trying to transition to fair trade products.
Re: “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers”.
It’s a bit like saying Deep Blue is an instruction executor, not an expected chess position utility maximizer.
The statement muddles up the “why” and “how” levels of explanation.
Executing instructions is how chess programs go about maximizing expected chess position utility.
Of course organisms cannot necessarily maximise their fitnesses—rather they attempt to maximise their expected fitness, just like other expected utility maximisers.
Tooby and Cosmides go on to argue the even more confused thesis:
“[Goals such as “maximize your fitness” or “have as many offspring as possible”] are probably impossible to instantiate in any computational system.”
Re: “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers”.
It’s a bit like saying Deep Blue is an instruction executor, not an expected chess position utility maximizer.
Not really. Deep Blue’s programming is so directly tied to winning chess that maximizing the value of its position is definitely what it “intends”. It actually “thinks about” how well it’s doing in this regard.
Living things, on the other hand, are far from explicit fitness maximizers. Evolution has given them behaviours that, in most natural circumstances, are fairly good at helping their genes. But in unusual circumstances they may well do things that are totally useless.
Humans today, for example, totally fail to maximize their fitness, e.g. by choosing to have just a small family and using contraception. We’re in an unusual situation—evolution knew nothing about condoms.
Re: Living things, on the other hand, are far from explicit fitness maximizers
Thus the point about organisms maximising their expected fitness. Organisms really do maximise their expected fitness—just like all other expected fitness maximisers. It’s just that their expectations may not be a good match for reality.
That is true even of Deep Blue. Its chess simulation is not the same as the real world of chess. It is living in the environment it was “designed” for—but it is resource-limited, and its program is sub-optimal. So its expectations too may be wrong. It can still lose.
As far as I can tell, the idea that organisms maximise their actual fitnesses is a ridiculous straw man erected by Tooby and Cosmides for nefarious rhetorical purposes of their own. Nobody ever actually thought that.
What about the idea that organisms are maximising something different—say expected happiness—rather than expected fitness, and these days the two can be divorced—e.g. by drugs? Again, much the same is equally true of Deep Blue—all expected fitness maximisers represent their expected fitness internally somehow, and then maximise that internal representation.
Organisms really are well thought of as maximising their expected fitness—under their resource constraints. They are, after all, the product of a gigantic optimisation process whose utility function favours effective expected fitness maximisers. It’s just that sometimes the expectations of the organisms are not a good match for reality.
Re: condoms—barrier contraceptives do not necessarily reduce inclusive fitness. They allow people to have sex who would not normally risk doing so. They allow families to compete better in more K-selected environments, by helping them to devote their resources to a smaller number of higher quality offspring. Of course they can also be used to sabotage your genetic program, but that is not their only use.
Thus the point about organisms maximising their expected fitness. Organisms really do maximise their expected fitness—just like all other expected fitness maximisers. It’s just that their expectations may not be a good match for reality.
What do the words “expected” and “expectations” mean in this context?
“Expected fitness” isn’t a term I’m familiar with. But we’re talking about organisms that are either not conscious, or are not consciously thinking about fitness. It can’t mean “expected” in the normal sense, and so I need an explanation.
Deep Blue is not conscious either—yet it still predicts possible future chess positions, and makes moves based on its expectation of their future payoff.
Take the term as a behaviourist would. Organisms have sensors, actuators, and processing that mediates between the two. If they behave in roughly the same way as an expected fitness maximiser would if given their inputs, then the name fits.
Deep Blue is not conscious either—yet it still predicts possible future chess positions, and makes moves based on its expectation of their future payoff.
Yes indeed, which is why I think it’s much easier to consider it a utility maximiser than organisms are. It explicitly “thinks about” the value of its position and tries to improve it. Organisms don’t. They just carry out whatever adaptations evolution has given them.
Take the term [expected fitness maximiser] as a behaviourist would. Organisms have sensors, actuators, and processing that mediates between the two. If they behave in roughly the same way as an expected fitness maximiser would if given their inputs, then the name fits.
But I don’t know how a behaviourist would take it. It’s not a term I’m familiar with.
From looking through Google hits, it seems that “expected fitness” is analogous to the “expected value” of a bet, and means “fitness averaged across possible futures”—but organisms don’t maximise that, because they often find themselves in situations where their strategies are sub-optimal. They often make bad bets.
(Deep Blue isn’t a perfect utility maximiser either, of course, since it can’t look far enough ahead. Only a perfect player would be a true maximiser.)
The concept of “expected fitness” is often used by biologists to counter the claim that “survival of the fittest” is a tautology. There, the expectation is by the biologist, who, looking at the organism, attempts to predict its fitness in some specified environment.
An expected fitness maximiser is just an expected utility maximiser, where the utility function is God’s utility function.
If you put such an entity in an unfamiliar environment—so that it doesn’t work very well—it doesn’t normally stop being an expected utility maximiser. If it still works at all, it probably still tries to choose actions that maximise its expected utility. It’s just that its expectations may not necessarily be a good match for reality.
Considering organisms as maximising their expected fitness is the central mode of explanation in evolutionary biology. Most organisms really do behave as though they are trying to have as many descendants as possible, given their limitations and the information they have available to them. That the means by which they do this involves something akin to executing instructions does not detract in any way from this basic point—nor is it refuted by the placing of organisms in unfamiliar environments, where their genetic program does not have the desired effect.
I am not clear about your claim that Deep Blue thinks, but organisms do not. Are you ignoring animals? Animals have brains which think—often a fair bit more sophisticated than the thoughts Deep Blue thinks.
An expected fitness maximiser is just an expected utility maximiser, where the utility function is God’s utility function.
I searched Google for “expected utility maximiser” and the 6th hit was your own website:
An expected utility maximiser is a theoretical agent who considers its actions, computes their consequences and then rates them according to a utility function.
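For concreteness, that definition amounts to the following loop, sketched here with a made-up toy decision problem (the actions, outcome probabilities, and utilities are hypothetical, included only to show the loop itself):

```python
# Minimal sketch of the quoted definition: an agent that enumerates its actions,
# weighs each action's possible consequences by their probability, scores those
# consequences with a utility function, and picks the action with the highest
# expected utility. The decision problem below is invented for illustration.

def expected_utility(action, outcome_model, utility):
    """Sum of P(outcome | action) * U(outcome) over the action's possible outcomes."""
    return sum(p * utility(outcome) for outcome, p in outcome_model[action].items())

def choose_action(outcome_model, utility):
    """Return the action whose expected utility is highest."""
    return max(outcome_model, key=lambda a: expected_utility(a, outcome_model, utility))

outcome_model = {
    "eat cookie":  {"tasty": 0.9, "stomach ache": 0.1},
    "skip cookie": {"still hungry": 1.0},
}
utility = {"tasty": 1.0, "stomach ache": -2.0, "still hungry": -0.5}.get

print(choose_action(outcome_model, utility))  # -> 'eat cookie' (0.7 vs. -0.5)
```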
The typical organism just doesn’t do this. I think you’d have a hard time arguing that even a higher mammal does this.
I am not clear about your claim that Deep Blue thinks, but organisms do not. Are you ignoring animals?
I didn’t say organisms don’t think. I said they don’t think about their fitness. They think about things like surviving, eating, finding mates, and so on, all of which usually contribute to reproduction in a natural environment.
The proof of this really is the way that a great many humans have indeed rebelled against their genes, and knowingly choose not to maximise their fitness. Dawkins, for example, has only one child. As a high-status male, he could presumably have had many more.
Hmm. If your intention is to stress that, in many cases, organisms behave as if they were fitness maximisers, then yes, I see your point. But it’s important to bear in mind that there are other cases where they don’t behave “correctly”—because they’re executing sub-optimal adaptations.
Tim, I hate to be rude, but I think this is just silly. There are a nontrivial number of people who deliberately refrain from having children. To the extent that your theory can explain them, it can explain anything.
If you’re careful about how you define utility, you can probably “explain” any actions with expected utility theory. It’s trivial; it’s an abuse of the formalism; it’s arguing by definition.
Re: An expected utility maximiser is a theoretical agent who considers its actions, computes their consequences and then rates them according to a utility function … I think you’d have a hard time arguing that even a higher mammal does this.
Real organisms are imperfect approximations to expected utility maximisers—but they really do act rather a lot like this. For example see the work of Jeff Hawkins on the role of prediction in brain function.
There’s relevant work by von Neumann and Morgenstern suggesting that all economic actors can be modelled as rational economic agents maximising some utility function—regardless of the details of their internal operation—with the caveat that any deviations from this model result in agents which are vulnerable to burning up their resources for no personal benefit under some circumstances—and in the case of evolution, it is likely that such vulnerabilities would either crop up rarely, or be selected against.
Of course organisms without brains have relatively little look-ahead. They are limited to computations that can be produced by their cells—which are still sophisticated computation devices, but not really on the same scale as a whole brain. The “expectations” of plants are mostly that the world is much the same as the one their ancestors experienced.
Re: organisms executing “unsuitable” adaptations...
It can certainly happen. But brains exist partly to help adapt to the effects of environmental fluctuations—and prevent unfamiliar environments from breaking the genetic program. Of course some organisms will still fail. Indeed, most male organisms will fail—even with an environment that is the expected one. That’s just how nature operates.
@Z. M. Davis:
As I have said, the idea that organisms typically act to maximise their inclusive fitness—to the best of their understanding and ability—is a central explanatory principle in evolutionary biology.
That some organisms fail to maximise their actual fitness—due to mutations, due to being in an unfamiliar environment, due to resource limitations, or due to bad luck is not relevant evidence against this idea.
The Tooby and Cosmides dichotomy between Adaptation-Executers and Fitness-Maximizers that this blog post is about is mostly a false one—based on muddling up “how” and “why” levels of explanation. Maximising their expected fitness is why organisms behave as they do. Executing adaptations is how they do it. These different types of explanations are complementary, and are not mutually exclusive.
Right, it’s not a dichotomy—the two explanations aren’t mutually exclusive. But it’s still an extremely relevant distinction—at least for those of us who are interested in the organisms themselves, rather than solely in the unconscious, abstract optimization process that created them.
Sure, I get the point. Humans are products of natural selection, so anything any human does can be seen as the result of selection pressures favoring behaviors that resulted in increased fitness in the EEA. There is some sense of the words in which you could look at someone who is, say, committing suicide (before having reproduced), and say: “What she’s really doing here is attempting to maximize her expected inclusive fitness!”
It’s not wrong so much as it is silly. The point of the post is that the organisms themselves don’t actually care about fitness. You can give a fitness-based account of why the organisms want what they actually do want. But so what? When we’re not talking about evolutionary biology, why should we care? You might as well say (I’m inspired here by a Daniel Dennett quote which I can’t locate at the moment) that no organism really maximizes expected fitness; they actually just follow the laws of physics. Well … okay, sure, but it’s silly to say so. You have to use the right level of explanation for the right situation.
ADDENDUM: Maybe this phrasing will help:
The claim that an organism is “trying to maximize expected fitness” applies in a broad sense to all evolved creatures, and as such is compatible with anything that any evolved creature does, including obviously fitness-reducing acts. In this broad sense, the “trying to maximize expected fitness” theory does a poor job of constraining anticipations compared to the theory that makes reference to the actual explicitly-represented goals of the organism in question. If we interpret “trying to maximize expected fitness” in a narrower sense in which organisms explicitly try to gain fitness, then it is obviously false (see, e.g., teenage suicides, women who have abortions when they could put the baby up for adoption, &c., &c.).
Re: The point of the post is that the organisms themselves don’t actually care about fitness
Most of them certainly act as though they do. Kick someone in the testicles, steal their girlfriend, threaten their son, or have sex with their wife and observe the results.
Of course people don’t always profess to care about their own fitness. Rather many profess to be altruists. That is an expected result of wishing to appear altruistic—rather than selfish—to others. Indeed, people are often good at detecting liars and are poor at deception—and the best way of appearing to be an altruist is to believe it yourself, and then use doublethink to rationalise away any selfish misdeeds. So don’t expect to be able to access your actual motives through introspection. Consciousness is part of the brain’s PR department—not a hotline to its motive system.
Re: teenage suicides
Adaptive explanations were never intended to cover all cases. Organisms suffer from brain damage, developmental defects, cancer, infectious diseases, misconceptions, malnutrition, and all manner of other problems that prevent them from having as many grandchildren as they otherwise might. However, these deviations from the rule do not indicate that adaptive explanations are vacuous, or that they are compatible with any outcome.
“Most of them certainly act as though they do. Kick someone in the testicles [...]”
Getting kicked in the testicles hurts. The explanation for why it hurts invokes selection pressures, but if you already know that it hurts, any general principles of evolutionary biology are screened off and irrelevant to explaining the organism’s behavior. Likewise the other things.
“Of course people don’t always profess to caring about their own fitness. Rather many profess to be altruists.”
This is a non-sequitur. Psychological selfishness is a distinct concept from the metaphorical genetic “selfishness” of, e.g., selfish genes. Someone who spends a lot of time caring for her sick child may be behaving in a way that is psychologically altruistic, but genetically “selfish.” Likewise, someone who refrains from having children because raising children is a burden may be psychologically selfish, but genetically “altruistic.”
“So don’t expect to be able to access your actual motives through introspection.”
These “actual motives” are epiphenomenal. We can say that sugar tastes good, and bodily damage feels bad, and self-deception is easy, &c., and that there are evolutionary explanations for all of these things, without positing any mysterious, unobservable secret motives.
Although at this point I suspect we are just talking past each other …
Re: “Of course people don’t always profess to caring about their own fitness. Rather many profess to be altruists.” This is a non-sequitur.
Not really. Selfishness and altruism here merely refer to whether you behave in the interests of your own genes, or whether you engage in self-sacrifice on behalf of others.
The effect I was discussing is illustrated well by my Bill Hamilton quotes:
...and...
http://alife.co.uk/essays/nietzscheanism/
After someone points this out, the incorrect response is to start adding clauses:
Or:
People are more likely to do this to something other than screwdrivers, obviously.
“The purpose of love is...”
“Eyebrows are there so that...”
It is easy to misinterpret the point of this post as claiming that the purpose assigned to an object is wrong or inadequate or hopelessly complex. That isn’t what is being said.
That statement sounds a little bit too strong to me. :-)
While we are, in the end, meat machines, we are adaptive meat machines, and one of the major advantages of intelligence is the ability to adapt to your environment—which is to say, not merely executing preexisting adaptations but generating new ones on the fly.
So while adaptation-execution is important, the very fact that we are capable of resisting adaptation-execution means that we are more than adaptation-executors. Indeed, most higher animals are capable of learning, and many are capable of at least basic problem solving.
There is pretty significant selective pressure towards being a fitness maximizer and not a mere adaptation-executor, because something which actively maximizes its fitness will by definition have higher fitness than one which does not.
So it’s better to view our taste buds as an adaptation fitted to ancestral conditions that included near-starvation and apples and roast rabbit,
And those apples were crab apples. I doubt that many of our distant ancestors would have experienced anything like our bred-for-sweetness fruit varieties on a regular basis. Those new fruit varieties are probably still very healthy – I’m just further highlighting the enormous gulf between what our ancestors ate and the concentrated sugar-fat-salt concoctions that we eat.
A link to Tooby and Cosmides’ paper cited in the intro: http://www.cep.ucsb.edu/papers/pfc92.pdf (Very long, but enlightening.)
I misread “organisms” as “organizations”.
And I feel like it actually does still apply somewhat, in the sense that the ideas passed down from team to team are the actual “DNA”, whereas the behavior of the organization is determined by those ideas but doesn’t feed directly back into them.