Evolutionary Psychology
Like “IRC chat” or “TCP/IP protocol”, the phrase “reproductive organ” is redundant. All organs are reproductive organs. Where do a bird’s wings come from? An Evolution-of-Birds Fairy who thinks that flying is really neat? The bird’s wings are there because they contributed to the bird’s ancestors’ reproduction. Likewise the bird’s heart, lungs, and genitals. At most we might find it worthwhile to distinguish between directly reproductive organs and indirectly reproductive organs.
This observation holds true also of the brain, the most complex organ system known to biology. Some brain organs are directly reproductive, like lust; others are indirectly reproductive, like anger.
Where does the human emotion of anger come from? An Evolution-of-Humans Fairy who thought that anger was a worthwhile feature? The neural circuitry of anger is a reproductive organ as surely as your liver. Anger exists in Homo sapiens because angry ancestors had more kids. There’s no other way it could have gotten there.
This historical fact about the origin of anger confuses all too many people. They say, “Wait, are you saying that when I’m angry, I’m subconsciously trying to have children? That’s not what I’m thinking after someone punches me in the nose.”
No. No. No. NO!
Individual organisms are best thought of as adaptation-executers, not fitness-maximizers. The cause of an adaptation, the shape of an adaptation, and the consequence of an adaptation, are all separate things. If you built a toaster, you wouldn’t expect the toaster to reshape itself when you tried to cram in a whole loaf of bread; yes, you intended it to make toast, but that intention is a fact about you, not a fact about the toaster. The toaster has no sense of its own purpose.
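The adaptation-executer idea can be put in a toy sketch (my own illustration, not from the post; all names and the saccharin example are hypothetical): an adaptation-executer follows a cue that was merely correlated with fitness in the ancestral environment, and keeps following it even when the correlation breaks.

```python
# Toy sketch: an adaptation-executer runs a fixed, wired-in rule; it does not
# consult the historical goal that produced the rule.

def adaptation_executer(stimulus):
    """Execute the wired-in rule: approach anything sweet. The rule was
    selected because sweetness correlated with calories ancestrally --
    but the rule itself knows nothing about calories."""
    return "approach" if stimulus["sweet"] else "ignore"

def fitness_maximizer(stimulus):
    """A hypothetical agent that directly optimizes the historical goal."""
    return "approach" if stimulus["calories"] > 0 else "ignore"

ripe_fruit = {"sweet": True, "calories": 90}
saccharin = {"sweet": True, "calories": 0}  # modern stimulus: cue decoupled from goal

# In the ancestral environment the two agents agree...
assert adaptation_executer(ripe_fruit) == fitness_maximizer(ripe_fruit) == "approach"
# ...but on the novel stimulus the executer follows the cue, not the goal.
assert adaptation_executer(saccharin) == "approach"
assert fitness_maximizer(saccharin) == "ignore"
```

The executer doesn't "malfunction" on saccharin; it executes its adaptation perfectly. The mismatch exists only relative to the historical cause of the rule, which is nowhere represented inside the rule.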
But a toaster is not an intention-bearing object. It is not a mind at all, so we are not tempted to attribute goals to it. If we see the toaster as purposed, we don’t think the toaster knows it, because we don’t think the toaster knows anything.
It’s like the Stroop test, in which subjects are asked to name the color of the ink in which the word “blue” is printed. It takes them longer to name the color, because they must untangle the meaning of the word from the color of its letters. They have no similar trouble naming the ink color of the word “wind”.
But a human brain, in addition to being an artifact historically produced by evolution, is also a mind capable of bearing its own intentions, purposes, desires, goals, and plans. Both a bee and a human are designs, but only a human is a designer. The bee is “wind”, the human is “blue”.
Cognitive causes are ontologically distinct from evolutionary causes. They are made out of a different kind of stuff. Cognitive causes are made of neurons. Evolutionary causes are made of ancestors.
The most obvious kind of cognitive cause is deliberate, like an intention to go to the supermarket, or a plan for toasting bread. But an emotion also exists physically in the brain, as a train of neural impulses or a cloud of spreading hormones. Likewise an instinct, or a flash of visualization, or a fleetingly suppressed thought; if you could scan the brain in three dimensions and you understood the code, you would be able to see them.
Even subconscious cognitions exist physically in the brain. “Power tends to corrupt,” observed Lord Acton. Stalin may or may not have believed himself an altruist, working toward the greatest good for the greatest number. But it seems likely that, somewhere in Stalin’s brain, there were neural circuits that reinforced pleasurably the exercise of power, and neural circuits that detected anticipations of increases and decreases in power. If there were nothing in Stalin’s brain that correlated to power—no little light that went on for political command, and off for political weakness—then how could Stalin’s brain have known to be corrupted by power?
Evolutionary selection pressures are ontologically distinct from the biological artifacts they create. The evolutionary cause of a bird’s wings is millions of ancestor-birds who reproduced more often than other ancestor-birds, with statistical regularity owing to their possession of incrementally improved wings compared to their competitors. We compress this gargantuan historical-statistical macrofact by saying “evolution did it”.
Natural selection is ontologically distinct from creatures; evolution is not a little furry thing lurking in an undiscovered forest. Evolution is a causal, statistical regularity in the reproductive history of ancestors.
And this logic applies also to the brain. Evolution has made wings that flap, but do not understand flappiness. It has made legs that walk, but do not understand walkyness. Evolution has carved bones of calcium ions, but the bones themselves have no explicit concept of strength, let alone inclusive genetic fitness. And evolution designed brains themselves capable of designing; yet these brains had no more concept of evolution than a bird has of aerodynamics. Until the 20th century, not a single human brain explicitly represented the complex abstract concept of inclusive genetic fitness.
When we’re told that “The evolutionary purpose of anger is to increase inclusive genetic fitness,” there’s a tendency to slide to “The purpose of anger is reproduction” to “The cognitive purpose of anger is reproduction.” No! The statistical regularity of ancestral history isn’t in the brain, even subconsciously, any more than the designer’s intentions of toast are in a toaster!
Thinking that your built-in anger-circuitry embodies an explicit desire to reproduce, is like thinking your hand is an embodied mental desire to pick things up.
Your hand is not wholly cut off from your mental desires. In particular circumstances, you can control the flexing of your fingers by an act of will. If you bend down and pick up a penny, then this may represent an act of will; but it is not an act of will that made your hand grow in the first place.
One must distinguish a one-time event of particular anger (anger-1, anger-2, anger-3) from the underlying neural circuitry for anger. An anger-event is a cognitive cause, and an anger-event may have cognitive causes, but you didn’t will the anger-circuitry to be wired into the brain.
So you have to distinguish the event of anger, from the circuitry of anger, from the gene complex which laid down the neural template, from the ancestral macrofact which explains the gene complex’s presence.
If there were ever a discipline that genuinely demanded X-Treme Nitpicking, it is evolutionary psychology.
Consider, O my readers, this sordid and joyful tale: A man and a woman meet in a bar. The man is attracted to her clear complexion and firm breasts, which would have been fertility cues in the ancestral environment, but which in this case result from makeup and a bra. This does not bother the man; he just likes the way she looks. His clear-complexion-detecting neural circuitry does not know that its purpose is to detect fertility, any more than the atoms in his hand contain tiny little XML tags reading “<purpose>pick things up</purpose>”. The woman is attracted to his confident smile and firm manner, cues to high status, which in the ancestral environment would have signified the ability to provide resources for children. She plans to use birth control, but her confident-smile-detectors don’t know this any more than a toaster knows its designer intended it to make toast. She’s not concerned philosophically with the meaning of this rebellion, because her brain is a creationist and denies vehemently that evolution exists. He’s not concerned philosophically with the meaning of this rebellion, because he just wants to get laid. They go to a hotel, and undress. He puts on a condom, because he doesn’t want kids, just the dopamine-noradrenaline rush of sex, which reliably produced offspring 50,000 years ago when it was an invariant feature of the ancestral environment that condoms did not exist. They have sex, and shower, and go their separate ways. The main objective consequence is to keep the bar and the hotel and condom-manufacturer in business; which was not the cognitive purpose in their minds, and has virtually nothing to do with the key statistical regularities of reproduction 50,000 years ago which explain how they got the genes that built their brains that executed all this behavior.
To reason correctly about evolutionary psychology you must simultaneously consider many complicated abstract facts that are strongly related yet importantly distinct, without a single mixup or conflation.
This level confusion also seems to show up whenever people talk about “free will”: a computer was programmed by us, but its code can still do things that we never designed it for. Evolution sure as heck never designed people to make condoms and birth control pills, so why can’t a computer do things we never designed it to do?
Are bugs free will?
If by “free will” we define any action that is not the intended behaviour of the original designer, then yes. And it actually does fit the bill relatively well, IMO—it is an emergent behaviour (usually) experienced during unexpected values appearing somewhere in the code. And just like with us, the behaviour is deterministic, and at the same time, pretty much impossible to predict in some cases :D
Multi-threading issues are a nice example—everything works very well in isolation, and breaks down in a real production environment.
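A minimal, deterministic sketch of the lost-update race behind “works in isolation, breaks in production” (my own illustration, not from any commenter): `counter += 1` is really read/add/write, and an unlucky interleaving of two threads loses an update. Here the bad schedule is simulated explicitly rather than left to a real scheduler.

```python
# Simulate two threads both executing `counter += 1`, with the worst
# possible interleaving: both read the shared value before either writes.

counter = 0

def lost_update():
    global counter
    read_a = counter      # thread A reads 0
    read_b = counter      # thread B reads 0 (A hasn't written yet)
    counter = read_a + 1  # thread A writes 1
    counter = read_b + 1  # thread B writes 1 -- A's increment is lost

lost_update()
# Two increments ran, but the counter only advanced by one.
assert counter == 1
```

Each “thread” behaves correctly in isolation; the bug is an emergent property of the interleaving, which is exactly why such behavior is deterministic in principle yet practically unpredictable.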
The part about the ontological distinctness of cognitive and evolutionary causes reminds me of my old English professor, who mixed the two. I knew it was wrong, but I didn’t have a label for it. He believed that nature had a kind of memory through natural selection.
You say “the neural circuitry of anger is a reproductive organ as surely as your liver” and “the evolutionary purpose of anger is to increase inclusive genetic fitness.”
I don’t believe you have enough evidence to assert these statements. All you know is that “angry ancestors had more kids” but you DON’T know that it’s as a result of the anger. It could have happened that, say, the same ancestors that could run faster also happened to have the capacity for anger. As a result of their faster running, they reproduced/survived, and so did anger.
I liken this to classic studies on the effects of divorce on children. Of course, kids end up worse off with parents that divorce, but all else equal, divorce may very well be GOOD for the kid. Similarly, although here angry ancestors did have more kids, anger may very well be BAD for reproduction/survival. I’m sure there’s also a good cynical example, too, like that the reason the dollar was the dominant currency through the 20th century was because it was green.
It’s possible that anger was a byproduct of something else which is adaptive (certainly such evolutionary byproducts exist)… but it seems pretty unlikely in this case. Anger is a rather complicated thing that seems to have its own modular brain systems; it doesn’t seem to be a byproduct of anything else.
The possibility of an “adaptation” being in fact an exaptation or even a spandrel is yet another reason to be incredibly careful about purposing teleology into a discussion about evolutionarily-derived mechanisms.
Yes dude! That’s showing rigor. And PnrJulius’ comment that Boris’ comment “seems unlikely” is precisely the soft-serve sludge that rigorous thinkers like Boris here have to slog against day in and day out. Boo Julius, boo. Yay Boris, yay.
And then a roar of the crowd for TechnoGuyRob who takes the long pass from Boris and dunks on Julius in a way J’s grandbabies gonna feel when he writes “The possibility of an ‘adaptation’ being in fact an exaptation or even a spandrel is yet another reason to be incredibly careful about purposing teleology into a discussion about evolutionarily-derived mechanisms.”
It’s problematic how stoked this exchange makes me. I’ma say it will not prove adaptive.
Oh, tsk tsk. But women with “creationist” brains just don’t have the sort of one night stands implied by your story, at least not as often as the ones with “evolutionary” brains, :-).
[citation needed]
OTOH, if they are creationists who have been reading too much Stephen Jay Gould, who knows what sorts of trouble they might get into. They might even tragically start selecting partners on multi-levels, while disobeying the correct equations, :-).
My name is Tiiba, and I approve of this post.
That said, I have a question. Your homepage says:
“Most of my old writing is horrifically obsolete. Essentially you should assume that anything from 2001 or earlier was written by a different person who also happens to be named “Eliezer Yudkowsky”. 2002-2003 is an iffy call.”
Well, as far as I can tell, most of your important writing on AI is “old”. So what does this mean? What ideas have been invalidated? What replaced them? Are you secretly building a robot?
I have information from the future!
EY says it best in The Sheer Folly of Callow Youth, but essentially EY once thought, “If there is truly such a thing as moral value, then a superintelligence will likely discover what the correct moral values are. If there is no such thing as moral value, then the current reality is no more valuable than the reality where I make an AI that kills everyone. Therefore, I should strive to make an AI regardless of ethical problems.”
Then in the early 2000s he had an epiphany. The mechanics of his objection had to do with disproving the first part of the argument, that a superintelligence would automatically do the ‘right’ thing in a universe with ethics. This is because you could build an AI ‘foo’ which was a superintelligence, and an AI ‘bar’ which was ‘foo’ except with a little gnome who sat at the very beginning of the decision algorithm and changed all of the goals from “maximize value” to “minimize value”. This proves that it is possible for two superintelligences to do two completely different things, therefore an AI must be a Friendly AI in order to do the ‘right’ thing. This is when he realized how close he had come to perhaps causing an extinction event, and realized how important the FAI project was. (It was also when he coined the term FAI to begin with.)
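The “gnome” argument can be sketched in a few lines (illustrative names, not from the original comment): the same search procedure, handed a utility function and its negation, selects opposite actions, so being a powerful optimizer does not by itself fix the goal.

```python
# The same generic optimizer, pointed at opposite goals, produces opposite
# behavior -- intelligence alone does not determine what is optimized.

def best_action(actions, utility):
    """A generic optimizer: pick the action with the highest utility."""
    return max(actions, key=utility)

def value(action):
    return {"help": 10, "ignore": 0, "harm": -10}[action]

actions = ["help", "ignore", "harm"]

foo = best_action(actions, value)                # maximizes value
bar = best_action(actions, lambda a: -value(a))  # the "gnome" flips the goal

assert foo == "help"
assert bar == "harm"
```

Both agents use the identical optimization machinery; only the sign of the goal differs, which is the point of the thought experiment.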
Tiiba, please re-ask on this month’s Open Thread and I’ll delete the comment here.
Tom McGabe: “Evolution sure as heck never designed people to make condoms and birth control pills, so why can’t a computer do things we never designed it to do?”
That’s merely unpredictability/non-determinism, which is not necessarily the same as free will.
“That’s merely unpredictability/non-determinism, which is not necessarily the same as free will.”
Prove it; at least give a reasonable definition of free will that doesn’t include “unpredictability/non-determinism”. For that matter, how about a definition of “unpredictability/non-determinism”.
Free will and, usually, non-determinism are among the big ideas everyone talks about without having any idea what they’re talking about.
Sure, but after a while this just becomes a habit and I don’t think it’s more difficult than, say, organic chemistry. But without some practice or exposure, it is deeply counterintuitive. It’s also probably encroaching on some sacred territory. You can subject some atrocious things like infanticide and homicidal rampages to evolutionary explanations. I don’t think anyone’s closed the book on any of these, but in all these cases I think EP has an interesting perspective. Generally, though, people don’t even want to think about it. People probably resist thinking along these lines because of the perceived violation of their freedom or morality (which violation is, as you say, an illusion).
I define free will as an ability to create and implement plans that move the world toward a goal. That seems to fit the way the term is used with regard to humans.
You can do something willfully (construct and implement a plan with a goal in mind), be coerced (construct and implement a plan that achieves something other than your own goals), be restrained (construct a plan that could achieve a goal, and then not implement it), or be manipulated (implement a plan that you did not construct, and whose goal you might not understand; you might or might not like the result).
Most programs implement plans that they did not create to achieve goals they don’t understand in a world of which they don’t know. But I think that if a machine can create and carry out plans, it has a degree of freedom.
billswift said: “Prove it.”
I am just saying ‘being unpredictable’ isn’t the same as free will, which I think is pretty intuitive (most complex systems are unpredictable, but presumably very few people will grant them all free will). As far as the relationship between randomness and free will, that’s clearly a large discussion with a large literature, but again it’s not clear what the relationship is, and there is room for a lot of strange explanations. For example some panpsychists might argue that ‘free will’ is the primitive notion, and randomness is just an effect, not the other way around.
“You can subject some atrocious things like infanticide and homicidal rampages to evolutionary explanations. I don’t think anyone’s closed the book on any of these, but in all these cases I think EP has an interesting perspective.”
Huh? We know that group selection can lead to cannibalism, so analyzing infanticide and homicidal rampages would seem rather trivial. On the other hand, it’s rather surprising that purely evolutionary mechanisms would lead to something as complex as our psychology and sense of morality. From a rational Bayesian perspective, how likely is it that an evolved adaptation executor would be able to formulate optimization criteria, while lacking an intuitive understanding of inclusive fitness?
Right, this is a bit of a problem. Why do we have these complicated brains that work toward their own goals? This seems counter-productive to the goal of maximizing fitness by executing adaptations… but maybe it has other advantages we’ve not yet understood.
Is the ability to plan really so special?
When an animal goes out of its nest, forages for food and then returns, isn’t that the same planning we exhibit too? And now add that humans are omnivorous and acted both as pack hunters and as gatherers; suddenly complexity arises that requires you to plan not only for yourself, but also as part of your group—these 10 guys will go hunt that mammoth, while these 5 will go gather berries and these 5 will make some new spears. Simply through the requirement for group interaction, you have another mechanism for the development of plans, psychology, morality.
And that’s kind of the point, isn’t it? Who says our psychology and morality is a direct product of biological evolution? Biological evolution only gave us the tools (a brain capable of forming plans); morality is a social behaviour that evolved alongside, led by intelligent designers (us) - with groups dividing on various issues, some of them surviving, some not; some of them spreading their ideas further, some not. We have long since taken over the reins of our development, even though we still move within certain constraints imposed on us, with various flexibility (eg. the ability to suppress our anger).
I think this is quite apparent when you look at animals bred in isolation or in different conditions; sure, there’s some behaviour based on genetics, but it obviously isn’t everything.
What are you saying—that EP has closed the book on them?
My point about infanticide etc. was that EP has bigger problems for becoming generally accepted than how difficult it is to reason about—problems having to do with a perceived removal of agency from human beings.
Anyway, it doesn’t strike me as surprising that purely evolutionary mechanisms led to our psychology, and especially not our sense of morality. Are these things much more complex than any other animal behavior we’re happily willing to concede to evolution?
“The neural circuitry of anger is a reproductive organ as surely as your liver. Anger exists in Homo sapiens because angry ancestors had more kids.” Thanks for this helpful information.
it wasn’t evolution that did it. it was my daddy peace dudes, seeyas all soon chris-dawg taken from the right (hand-side)
Another part of the evolutionary psychology puzzle is the distinction of levels, or what Yudkowsky calls the hands vs fingers.
It’s not that anger causes reproduction. It’s the anger circuitry, or the ability to be angry, that causes better reproductive success. The former statement doesn’t even make sense in terms of evolutionary psychology, while the latter is fairly obvious (namely, that a lack of anger leads to not fighting one’s way out of a reproductively hopeless situation; anger circuitry forces action in reproductive dead-ends).
“Anger exists in Homo sapiens because angry ancestors had more kids. There’s no other way it could have gotten there.”
This is not entirely true—as Boris seems to have noticed. More generally: anything that purely helps survival is certainly more likely to propagate through a species. However, there are other traits that might propagate, such as any of those that are either: a) neither useful nor a burden, or b) a negative byproduct of something useful, where the harm doesn’t outweigh the benefit.
Is it relevant that humanity doesn’t have competent competition?
I wonder how we’d be doing if we were up against coyotes with thumbs.
Um...Neanderthals had thumbs, and fairly large brains. We pretty much wiped them out. If they weren’t “competent competition”, I’m not sure what you’d call “competent” (unless it would have been some species that wiped us out, who would be here having the exact conversation, or something so delicately balanced that I doubt would ever happen).
I thought they were subsumed into the European branch of Cro-Magnon (us).
Controversial: http://en.wikipedia.org/wiki/Neanderthal_admixture_theory—but in any case, 1%-4% of the genome? That’s close enough to extinction...if coyotes interbred with dogs, and lots of household dogs had 1%-4% coyote DNA in them, but there were no coyotes left in the wild, I’d treat it as “extinct enough for me.” :)
We haven’t wiped out coyotes, so they might be more competent competitors (even without thumbs) than Neanderthals.
First of all, I don’t see any apes or monkeys competing with us presently. Also, we are an evolved species. There have certainly been competitors along the way—perhaps said monkeys or apes, and most certainly Neanderthals, as moshez mentioned. We’ve won, though; that is hardly arguable.
Other simians compete with us for territory, but kind of like a team of quadriplegic children would compete in the World Cup, so it’s not immediately clear that it counts as competing.
I don’t understand the point of this post. I mean, I understand its points, but why is this post here? Is it trying to point out that: (a) intent and reality are not always—and usually aren’t—entangled? (b) Reality happened and our little XML-style purpose tags are added after the fact?
It seems odd to spend so much time saying, “Humans reproduced successfully. Anger exists in humans.” If the anger part is correlated to the reproduction part it seems fair to ask, “Why did anger help reproduction?” This is a different question than, “What is the purpose of anger?” Is this difference what the article was pointing out?
How is this different from any other topic?
The idea of special-casing evolutionary psychology is where I feel I am losing the plot.
It would be odd if people didn’t get confused about this excessively.
Evolutionary psychology is related to the study of cognitive biases, so being able to reason about it well is important. It is also easily observable that people make the mistakes this post warns against.
When discussing goal systems and terminal values, people with a confused view of evolutionary psychology tend to suggest that we should try to maximize inclusive genetic fitness, and this post discusses the confusion which leads to that common mistake.
And Eliezer has also drawn examples from computer science; I don’t think he is favoring evolutionary psychology. It is not surprising that some posts focus on a subtopic of rationality or a specific domain of its application.
The part where this gets difficult is understanding why we evolved to have conscious intentions in the first place. What purpose does it serve to make us actually want things, rather than simply act as if we wanted them? Why aren’t we like toasters?
This also gets at one of the reasons why I think it’s a fool’s errand to try to make the singularity with a non-sentient AI. If it were possible to make that level of intelligence without consciousness (and do so efficiently and cheaply), surely natural selection would have done so? Instead it made us sentient; this suggests that sentience is a useful thing to have.
For certain values of “act as if”, what you’re asking is why we aren’t p-zombies.
Yeah, and that’s an interesting question.
I don’t think that aged well :)
I found a related article on this topic which seems to expand on the same ideas.
https://journals.sagepub.com/doi/10.1177/1745691610393528