Rebelling Within Nature
Followup to: Fundamental Doubts, Where Recursive Justification Hits Bottom, No Universally Compelling Arguments, Joy in the Merely Real, Evolutionary Psychology
“Let us understand, once and for all, that the ethical progress of society depends, not on imitating the cosmic process, still less in running away from it, but in combating it.”
—T. H. Huxley (“Darwin’s bulldog”, early advocate of evolutionary theory)
There is a quote from some Zen Master or other, who said something along the lines of:
“Western man believes that he is rebelling against nature, but he does not realize that, in doing so, he is acting according to nature.”
The Reductionist Masters of the West, strong in their own Art, are not so foolish; they do realize that they always act within Nature.
You can narrow your focus and rebel against a facet of existing Nature—polio, say—but in so doing, you act within the whole of Nature. The syringe that carries the polio vaccine is forged of atoms; our minds, that understood the method, embodied in neurons. If Jonas Salk had to fight laziness, he fought something that evolution instilled in him—a reluctance to work that conserves energy. And he fought it with other emotions that natural selection also inscribed in him: feelings of friendship that he extended to humanity, heroism to protect his tribe, maybe an explicit desire for fame that he never acknowledged to himself—who knows? (I haven’t actually read a biography of Salk.)
The point is, you can’t fight Nature from beyond Nature, only from within it. There is no acausal fulcrum on which to stand outside reality and move it. There is no ghost of perfect emptiness by which you can judge your brain from outside your brain. You can fight the cosmic process, but only by recruiting other abilities that evolution originally gave to you.
And if you fight one emotion within yourself—looking upon your own nature, and judging yourself less than you think should be—saying perhaps, “I should not want to kill my enemies”—then you make that judgment, by...
How exactly does one go about rebelling against one’s own goal system?
From within it, naturally.
This is perhaps the primary thing that I didn’t quite understand as a teenager.
At the age of fifteen (fourteen?), I picked up a copy of TIME magazine and read an article on evolutionary psychology. It seemed like one of the most massively obvious-in-retrospect ideas I’d ever heard. I went on to read The Moral Animal by Robert Wright. And later The Adapted Mind—but from the perspective of personal epiphanies, The Moral Animal pretty much did the job.
I’m reasonably sure that if I had not known the basics of evolutionary psychology from my teenage years, I would not currently exist as the Eliezer Yudkowsky you know.
Indeed, let me drop back a bit further:
At the age of… I think it was nine… I discovered the truth about sex by looking it up in my parents’ home copy of the Encyclopedia Britannica (stop that laughing). Shortly after, I learned a good deal more by discovering where my parents had hidden the secret 15th volume of my long-beloved Childcraft series. I’d been avidly reading the first 14 volumes—some of them, anyway—since the age of five. But the 15th volume wasn’t meant for me—it was the “Guide for Parents”.
The 15th volume of Childcraft described the life cycle of children. It described the horrible confusion of the teenage years—teenagers experimenting with alcohol, with drugs, with unsafe sex, with reckless driving, the hormones taking over their minds, the overwhelming importance of peer pressure, the tearful accusations of “You don’t love me!” and “I hate you!”
I took one look at that description, at the tender age of nine, and said to myself in quiet revulsion, I’m not going to do that.
And I didn’t.
My teenage years were not untroubled. But I didn’t do any of the things that the Guide for Parents warned me against. I didn’t drink, drive, drug, lose control to hormones, pay any attention to peer pressure, or ever once think that my parents didn’t love me.
In a safer world, I would have wished for my parents to have hidden that book better.
But in this world, which needs me as I am, I don’t regret finding it.
I still rebelled, of course. I rebelled against the rebellious nature the Guide for Parents described to me. That was part of how I defined my identity in my teenage years—“I’m not doing the standard stupid stuff.” Some of the time, this just meant that I invented amazing new stupidity, but in fact that was a major improvement.
Years later, The Moral Animal made suddenly obvious the why of all that disastrous behavior I’d been warned against. Not that Robert Wright pointed any of this out explicitly, but it was obvious given the elementary concept of evolutionary psychology:
Physiologically adult humans are not meant to spend an additional 10 years in a school system; their brains map that onto “I have been assigned low tribal status”. And so, of course, they plot rebellion—accuse the existing tribal overlords of corruption—plot perhaps to split off their own little tribe in the savanna, not realizing that this is impossible in the Modern World. The teenage males map their own fathers onto the role of “tribal chief”...
Echoes in time, thousands of repeated generations in the savanna carving the pattern, ancient repetitions of form, reproduced in the present in strange twisted mappings, across genes that didn’t know anything had changed...
The world grew older, of a sudden.
And I’m not going to go into the evolutionary psychology of “teenagers” in detail, not now, because that would deserve its own post.
But when I read The Moral Animal, the world suddenly acquired causal depth. Human emotions existed for reasons, they weren’t just unexamined givens. I might previously have questioned whether an emotion was appropriate to its circumstance—whether it made sense to hate your parents, if they did really love you—but I wouldn’t have thought, before then, to judge the existence of hatred as an evolved emotion.
And then, having come so far, and having avoided with instinctive ease all the classic errors that evolutionary psychologists are traditionally warned against—I was never once tempted to confuse evolutionary causation with psychological causation—I went wrong at the last turn.
The echo in time that was teenage psychology was obviously wrong and stupid—a distortion in the way things should be—so clearly you were supposed to unwind past it, compensate in the opposite direction or disable the feeling, to arrive at the correct answer.
It’s hard for me to remember exactly what I was thinking in this era, but I think I tended to focus on one facet of human psychology at any given moment, trying to unwind myself a piece at a time. IIRC I did think, in full generality, “Evolution is bad; the effect it has on psychology is bad.” (Like it had some kind of “effect” that could be isolated!) But somehow, I managed not to get to “Evolutionary psychology is the cause of altruism; altruism is bad.”
It was easy for me to see all sorts of warped altruism as having been warped by evolution.
People who wanted to trust themselves with power, for the good of their tribe—that had an obvious evolutionary explanation; it was, therefore, a distortion to be corrected.
People who wanted to be altruistic in ways their friends would approve of—obvious evolutionary explanation; therefore a distortion to be corrected.
People who wanted to be altruistic in a way that would optimize their fame and repute—obvious evolutionary distortion to be corrected.
People who wanted to help only their family, or only their nation—acting out ancient selection pressures on the savanna; move past it.
But the fundamental will to help people?
Well, the notion of that being merely evolved, was something that, somehow, I managed to never quite accept. Even though, in retrospect, the causality is just as obvious as teen revolutionism.
IIRC, I did think something along the lines of: “Once you unwind past evolution, then the true morality isn’t likely to contain a clause saying, ‘This person matters but this person doesn’t’, so everyone should matter equally, so you should be as eager to help others as help yourself.” And so I thought that even if the emotion of altruism had merely evolved, it was a right emotion, and I should keep it.
But why think that people mattered at all, if you were trying to unwind past all evolutionary psychology? Why think that it was better for people to be happy than sad, rather than the converse?
If I recall correctly, I did ask myself that, and sort of waved my hands mentally and said, “It just seems like one of the best guesses—I mean, I don’t know that people are valuable, but I can’t think of what else could be.”
This is the Avoiding Your Belief’s Real Weak Points / Not Spontaneously Thinking About Your Belief’s Most Painful Weaknesses antipattern in full glory: Get just far enough to place yourself on the first fringes of real distress, and then stop thinking.
And also the antipattern of trying to unwind past everything that is causally responsible for your existence as a mind, to arrive at a perfectly reliable ghost of perfect emptiness.
Later, having also seen others making similar mistakes, it seems to me that the general problem is an illusion of mind-independence that comes from picking something that appeals to you, while still seeming philosophically simple.
As if the appeal to you, of the moral argument, weren’t still a feature of your particular point in mind design space.
As if there weren’t still an ordinary and explicable causal history behind the appeal, and your selection of that particular principle.
As if, by making things philosophically simpler-seeming, you could enhance their appeal to a ghost-in-the-machine who would hear your justifications starting from scratch, as fairness demands.
As if your very sense of simplicity were not an aesthetic sense inscribed in you by evolution.
As if your very intuitions of “moral argument” and “justification”, were not an architecture-of-reasoning inscribed in you by natural selection, and just as causally explicable as any other feature of human psychology...
You can’t throw away evolution, and end up with a perfectly moral creature that humans would have been, if only we had never evolved; that’s really not how it works.
Why accept intuitively appealing arguments about the nature of morality, rather than intuitively unappealing ones, if you’re going to distrust everything in you that ever evolved?
Then what is right? What should we do, having been inscribed by a blind mad idiot god whose incarnation-into-reality takes the form of millions of years of ancestral murder and war?
But even this question—every fragment of it—the notion that a blind mad idiocy is an ugly property for a god to have, or that murder is a poisoned well of order, even the words “right” and “should”—all a phenomenon within nature. All traceable back to debates built around arguments appealing to intuitions that evolved in me.
You can’t jump out of the system. You really can’t. Even wanting to jump out of the system—the sense that something isn’t justified “just because it evolved”—is something that you feel from within the system. Anything you might try to use to jump—any sense of what morality should be like, if you could unwind past evolution—is also there as a causal result of evolution.
Not everything we think about morality is directly inscribed by evolution, of course. We have values that our parents taught us as we grew up—values that won out in a civilizational debate conducted with reference to other moral principles, which were themselves argued into existence by appealing to built-in emotions, using an architecture-of-interpersonal-moral-argument that evolution burped into existence.
It all goes back to evolution. This doesn’t just include things like instinctive concepts of fairness, or empathy, it includes the whole notion of arguing morals as if they were propositional beliefs. Evolution created within you that frame of reference within which you can formulate the concept of moral questioning. Including questioning evolution’s fitness to create our moral frame of reference. If you really try to unwind outside the system, you’ll unwind your unwinders.
That’s what I didn’t quite get, those years ago.
I do plan to dissolve the cognitive confusion that makes words like “right” and “should” seem difficult to grasp. I’ve been working up to that for a while now.
But I’m not there yet, and so, for now, I’m going to jump ahead and peek at an answer I’ll only later be able to justify as moral philosophy:
Embrace reflection. You can’t unwind to emptiness, but you can bootstrap from a starting point.
Go on morally questioning the existence (and not just appropriateness) of emotions. But don’t treat the mere fact of their having evolved as a reason to reject them. Yes, I know that “X evolved” doesn’t seem like a good justification for having an emotion; but don’t let that be a reason to reject X, any more than it’s a reason to accept it. Hence the post on the Genetic Fallacy: causation is conceptually distinct from justification. If you try to apply the Genetic Accusation to automatically convict and expel your genes, you’re going to run into foundational trouble—so don’t!
Just ask if the emotion is justified—don’t treat its evolutionary cause as proof of mere distortion. Use your current mind to examine the emotion’s pluses and minuses, without being ashamed; use your full strength of morality.
Judge emotions as emotions, not as evolutionary relics. When you say, “motherly love outcompeted its alternative alleles because it protected children that could carry the allele for motherly love”, this is only a cause, not a sum of all moral arguments. The evolutionary psychology may grant you helpful insight into the pattern and process of motherly love, but it neither justifies the emotion as natural, nor convicts it as coming from an unworthy source. You don’t make the Genetic Accusation either way. You just, y’know, think about motherly love, and ask yourself if it seems like a good thing or not; considering its effects, not its source.
You tot up the balance of moral justifications, using your current mind—without worrying about the fact that the entire debate takes place within an evolved framework.
That’s the moral normality to which my yet-to-be-revealed moral philosophy will add up.
And if, in the meanwhile, it seems to you like I’ve just proved that there is no morality… well, I haven’t proved any such thing. But, meanwhile, just ask yourself if you might want to help people even if there were no morality. If you find that the answer is yes, then you will later discover that you discovered morality.
Part of The Metaethics Sequence
Next post: “Probability is Subjectively Objective”
Previous post: “Fundamental Doubts”
Since the human brain is not capable of recursive alteration of its source code, and remains almost identical to the first conscious brains that evolved 100,000 years ago, one must wonder if it is a tool capable of (or appropriate for) designing a friendly AI. In a time when the parabolic rate of increase in information far exceeds any possibility for natural selection to produce brains that do not rely on the evolved emotions and motivations you discuss, how can such a brain be expected to program the AI source code appropriately, when that brain is not capable of doing the same for itself? That is, how can that brain be expected to be capable of choosing what actually is “friendly”, in light of its evolved state?
The human brain isn’t even appropriate for arithmetic. If we can manage making AI with it, it’s nothing short of a miracle. But we might be able to do it, just like you could probably unscrew something with a hammer. It’s entirely the wrong tool for the job, but if it’s the only tool you have, perhaps you can make it work.
I don’t see how this is relevant to the article.
Where are you getting “not capable”?
I see this as a continuation of the same theme: a kind of “frame of reference” issue.
For example, I suspect that time doesn’t exist when you look at the universe from the most broad perspective. Instead, you have this kind of platonia on which time is just a relation between different points across one of its dimensions. But that doesn’t mean that time doesn’t exist within my personal frame of reference. I’m here experiencing time right now. Similarly, I know that my hand is mostly empty space, from a universal point of view, but that doesn’t mean that it makes sense for me to relate to my hand as being empty space. In my frame of reference it’s quite solid. Same for free will: I understand that from the universal perspective it doesn’t exist in some sense, but for me, in my frame of reference, it does. “I” am “free” to do what “I” decide to do. Viewed correctly there is no contradiction, just as there is no contradiction between the fact that my hand is “empty space” and yet quite solid.
Here again we have the same thing, but with morality. If we zoom out to the universal scale perhaps there is no morality. However, the universal scale is not where I am. Shooting my mother is still wrong according to my values and principles, just as I have free will, time exists, and my hand is solid. My desire to preserve my mother’s life may well have an evolutionary explanation, however that doesn’t in any way invalidate my desire, or give me any reason to discard it, or even want to discard it.
Why does that not surprise me?
You are taking on the tone of a self-help guru.
I get wary when I hear someone state what I should do or how I should view the world. Do you not find it presumptuous to dictate what is best for us?
Once you unwind past evolution and true morality isn’t likely to contain [...]
I think either a word has been missed out here, or and should be then.
If I recall correctly, I did ask myself that, and sort of waved my hands mentally and said, “It just seems like one of the best guesses—I mean, I don’t know that people are valuable, but I can’t think of what else could be.”
I find this fairly ominous, since that handwaved belief happens to be my current belief: that conscious states are the only things of (intrinsic) value: since only those conscious states can contain affirmations or denials that whatever they’re experiencing has value.
Zen Buddhist students are sometimes told to wash their mouths out with soap when they say the word ‘Buddha’. I suggest Eliezer does the same thing—with the words ‘Master’, ‘Way’, and ‘Zen’.
On a final note, it is absurd to speak about ‘rebelling against your goal structure’. Deeper goals and preferences can result in the creation and destruction of shallower ones—that’s all.
“In a safer world, I would have wished for my parents to have hidden that book better.”
Why?
He means that in the counterfactual world where he didn’t find the book, he would have become normal. In that case, he would have wished that his parents had not let him read the book (which is precisely what would have happened).
I remember first having this revelation as something along the lines of: “You know when you’re in love or overcome by anger, and you do stupid things, and afterward you wonder what the hell you were thinking? Well, your ‘normal’ emotional states are just like that, except you never get that moment of reflection to wonder what the hell you were thinking.” I tried to resolve it with the kind of reflective deliberation that I think you’re prescribing here. Later I adopted a sort of happy fatalism: We’re trapped inside our own psychology and that’s fine!
Not long after, I read the obscurantist French philosopher Alain Badiou (who I do not recommend!), and was inspired by his account of truth. Badiou takes truth to be “fidelity to the event.” We are witness to a transformative event and take it upon ourselves to alter the world in its name. What I realized was (and not to disappoint my fans) the only thing that can interrupt business-as-usual for us is science. Science is the only thing truly alien to us; it’s the only thing that can rupture the fatalistic clockwork playing-out of our psychology on our environment. The potential of science lies in its ability to transform us. So I adopted a sort of utilitarianism where the goal is to maximize the amount of science being done and maximize the degree to which it transforms our lives.
That’s enough morality for me.
“Physiologically adult humans are not meant to spend an additional 10 years in a school system; their brains map that onto “I have been assigned low tribal status”. And so, of course, they plot rebellion”
Of course
“—accuse the existing tribal overlords of corruption—plot perhaps to split off their own little tribe in the savanna, not realizing that this is impossible in the Modern World. ”
It was possible pretty recently. Sounds like a good description of the rural side of the 60s counter-culture. Wouldn’t be too far from truth to say that they did split off their own little tribes in the VAST and EMPTY savanna that’s still around, and then got crushed by a mix of the larger tribe that they had been part of, their own schismatic tendencies and bad theories of tribal organization, and other tougher ‘outsider’ tribes such as outlaw biker gangs.
Actually, that’s news to me. It sounds convincing, and is quite sad.
Biker gangs really are fascinating. It seems impossible for a ‘good’ or hygiene-conscious person to penetrate them unless you have a substantial organisation behind you (like a police or intelligence force to coordinate fake hits and such).
Eliezer,
You attribute a lot of things to genetic evolution, and nothing to memetic (cultural) evolution. What is the reasoning behind disregarding memes? Is there an argument that none of our emotions, and others things discussed, are memetic?
Memes are genetic—according to any sensible information-theoretic definition of what constitutes a gene. They are a type of gene that is not made of DNA.
I experienced a similar revulsion toward the teenaged as a youth. It might be a nerd thing.
Eliezer seems to be saying something like “These moral intuitions are valid to me because I have them, regardless of why”. It seems to me that basis leaves him no room to engage in moral reasoning and say that any of Jonathan Haidt’s five moral foundations (harm and fairness, I am guessing) are more valid or trump any others (loyalty, authority, purity). He says the source does not disqualify or justify any moral principle, but then what DOES disqualify or justify such things? To me the simple answer is nothing.
Tim Tyler,
Genes and memes are both things on which evolution acts (replicators), but they also have important differences so it’s useful to use different words. In particular, the logic of what sort of behaviors would evolve in people is different if you consider memes or genes. The available replication strategies are different if for genes (which require sex and parenting) and memes (which require older people to communicate to younger people).
Whether something is genetic or memetic is also highly relevant to A) how (by what mechanism) it might influence people’s behavior B) how difficult it is for someone to change that trait.
Allan, thanks, fixed.
WTF: I get wary when I hear someone state what I should do or how I should view the world.
Wow, and yet you learned how to read and write without anyone ever teaching you? You must be an amazing genius. Either that, or people have been telling you what to do your entire life, but you don’t notice until they hit you over the head with it.
Caledonian: Deeper goals and preferences can result in the creation and destruction of shallower ones—that’s all.
There’s no hierarchical ordering of emotions. They are neither deep nor shallow, they are simply there.
Elliot: What is the reasoning behind disregarding memes?
I don’t think that we have any memetic emotions—I could be wrong but it’s a scary thought. Memes exist in a framework determined by evolved brains; see Tooby and Cosmides’s “The Psychological Foundations of Culture”. I had thought I discussed this in the course of tracing back morals through arguments that appealed to built-in emotions.
‘Emotions’ are a qualitatively and physiologically distinct set of cognitive algorithms that can be felt particularly strongly from the inside (because they have large effects on muscle tension and homeostasis); but we can definitely build strong qualia for the subjective experience of other cognitive algorithms, especially when they draw on emotions or parts of emotions as subcomponents of themselves. Briefly querying my brain I can’t think of an obvious clear example, but there are contenders, and because human mindspace is big I don’t doubt that some people have memetic emotions (that is, cognitive algorithms with strong qualia that have physiological/homeostatic correlates, or are particularly strong despite the lack of them, that are not evolutionarily programmed but are programmed via powerful memetic transmission).
There are also ‘genetic’ emotions you might never have experienced but for memes. (The jhanas from vipassana meditation come to mind.)
I’m not sure what you mean by a memetic emotion, but as I understand the phrase, they’re quite common. They’re a lot of why people go to sporting events and concerts—they want to be caught up in a group emotion.
I think that EY is claiming that there are only so many hormones and neurotransmitters, and that they are all “built in” by evolution. You seem to be claiming that we (memetically) learn to trigger these emotions using novel stimuli.
But as to EY’s claim: Is Viagra a memetic emotion? Cocaine? Zoloft? Ethanol? Sniffed glue?
IMHO, it’s bad form to use the term “gene” as shorthand for “nucleic replicator”.
Gene is, or should be a general biological term, as should genetics. Tying these concepts to one genetic medium—a-la molecular biology—would be short sighted, and would make it unnecessarily difficult to discuss pre-nucleic genetics.
So by all means distinguish between cellular and cultural inheritance, but please don’t do so by calling the former “genes” and the latter “memes”—and claiming that the concepts are mutually exclusive.
It’s also possible that people aren’t monolithic personalities and that it is literally accurate to describe them as possessing multiple (and frequently conflicting) emotions.
Eliezer,
“Genes determined the framework which memes exist in” is not an important argument about what sorts of memes we have. I think your intended implication is that genes fundamentally have control over these issues. But genes created brains with the following characteristic: brains are universal knowledge creators. With this established, other parts of the design of brains don’t really matter. Memes are a kind of knowledge and so there are no restrictions on what memes are found in humans due to genetics or some aspect of our brain’s design.
BTW what is the implication of emotions being memes that would be scary? The most notable consequence I see is that people could be more optimistic about changing the emotional part of their lives, which is a happy thought.
Caledonian, I’d like to see this so-called hierarchy. As far as I can tell, each facet of ourselves that we judge, is judged by the whole, and all the pieces of ourselves contribute their weight. There is no ordering of overrides. And absolutely no reason why the brain would even contain such a thing.
Elliot, you’ve been infected by a well-known fallacious meme. Psychology constrains very strongly the kind of culture we acquire. Read Tooby and Cosmides’s The Psychological Foundations of Culture (online), or grab a copy of Steven Pinker’s “The Blank Slate”.
I don’t get this side debate between Eliezer and Caledonian.
Caledonian’s original comment was “Deeper goals and preferences can result in the creation and destruction of shallower ones”, which cites a common and accepted belief in cognitive science that there is such a thing as hierarchical goal systems, which might explain human behavior. Nothing controversial there.
Eliezer responds by saying that emotions, not goals, have to be flat, and further, that “each facet of ourselves that we judge, is judged by the whole”, which is only ambiguously related to both goals and emotions.
Now Caledonian, did you mean something other than just generic goals to explain this conflict?
Or Eliezer, do you really believe that a goal system is necessarily flat, or that emotions == goals? If so, under what pretense?
Eliezer,
Is the unstated premise of your comment that (at least a significant amount of) human psychology is genetic in origin? I agree with you that given some preexisting psychology there are restrictions on what memes are (feasibly) acquired. Without a premise along those lines, I don’t see the relevance of what psychology can do. But any argument with that premise cannot address the question of why you attribute things to genes over memes in the first place.
just ask yourself if you might want to help people even if there were no morality. If you find that the answer is yes, then you will later discover that you discovered morality.
And if the answer is no, will you not have also discovered that you discovered morality? That is, is it a particular answer to the question that qualifies you to say you have discovered morality, or the fact that you have found an answer, any answer at all?
just ask yourself if you might want to help people even if there were no morality
This is still the interesting bit for me as well. I think that on reflection, the answer would have to be ‘no, by definition’. Wanting to help people creates your morality, not vice versa. Your morality is created by your actions, it doesn’t define them. If there can be any form of objective judgment of an animal’s ethics, surely it can only be defined by what that animal did. Not what it thinks, not what it thinks it should have done, but what it did.
Hence, if you want to help people, that’s your morality. Otherwise the cart is coming before the horse.
I picked up The Moral Animal on Eliezer’s recommendation, after becoming so immersed I read 50 pages in the bookstore. Was not disappointed. This is the most eye-opening book I’ve read in quite a while, nearly couldn’t put it down. And this is from someone who used to stay a mile away from anything related to biology on the grounds that it’s “boring”.
Will probably blog it. Will also continue to drop subjects from sentences.
I totally love the Huxleys!!! When I was 15, I wanted to hunt down and marry one… Oh Thomas Henry and his iron will to flout society… Aldous and his mescalin-fueled orgies because of his bad luck in marrying a lesbian when he was 19… Julian and his microscope and myosin heavy chains… Andrew and his cephalopods and wacky electrical theories of mind… What a bunch of fantastic geniuses! Can the superintelligence resurrect them? PLEASE!!!! Can it throw in Yeats and Keats while it’s at it??? That would be awesome. That would be a dream come true.
This is a cool post. I like hearing how smart people evolved. We need more such evolutions.
My first intro to ev psych was as a little girl (7?), listening to my friend’s psychiatrist father, whom I met at the Unitarian Universalist church, talk about Carl Sagan and the cosmos… Got me obsessed with the X-files too, even that young. My first revelation of where my mind came from was at age 12 when I read “Blueprints: Solving the Mystery of Evolution.” Actually… No, it was when we were discussing Greek mythology in social studies… I thought, “Gee, Man invented God. What else could he do with what he knew at the time?” Then again, I had never believed in God, I just realized why We invented Him. Incidentally, I learned what sex was at the age of 2, when my parents showed me the video ‘Where do I come from?’ Great fun. I recommend it to all.
Eliezer: “And if, in the meanwhile, it seems to you like I’ve just proved that there is no morality… well, I haven’t proved any such thing. But, meanwhile, just ask yourself if you might want to help people even if there were no morality. If you find that the answer is yes, then you will later discover that you discovered morality.”
This is beautiful. It is, more importantly, True.
I can see that I’m coming late to this discussion, but I wanted both to admire it and to share a very interesting point that it made clear for me (which might already be in a later post, I’m still going through the Metaethics sequence).
This is excellent. It confirms, and puts into much better words, an intuitive response I keep having to people who say things like, “You’re just donating to charity because it makes you feel good.” My response, which I could never really vocalise, has been, “Well, of course it does! If I couldn’t make it feel good, my brain wouldn’t let me do it!” The idea that everything we do comes from the brain, hence from biology, hence from evolution, even the actions that, on the surface, don’t make evolutionary sense, makes human moral, prosocial behaviour a lot more explicable. Any time we do something, there have to be enough neurons ganging up to force the decision through, against all of the neurons blocking it for similarly valid reasons. (Please don’t shoot me, any neuroscientists in the audience.)
What amazes me is how well some goals, which look low-priority on an evolutionary level, manage to overtake what should be the driving goals. For example, having lots of unprotected sex in order to spread my genes around (note: I am male) should take precedence over commenting on a rationality wiki. And yet, here I am. I guess reading Less Wrong makes my brain release dopamine or something? The process which lets me overturn my priorities (in fact, forces me to overturn my priorities) must be a very complicated one, and yet it works.
To give a more extreme example, and then to explain the (possibly not-so-)amazing insight that came with it:
Suppose I went on a trip around the world, and met a woman in northern China, or anywhere else where my actions are unlikely to have any long-term consequences for me. I know, because I think of myself as a “responsible human being”, that if we have sex, I’ll use contraception. This decision doesn’t help me—it’s unlikely that any children I have will be traced back to me in Australia. (Let’s also ignore STDs for the sake of this argument.) The only benefit it gives me is the knowledge that I’m not being irresponsible in letting someone get pregnant on my account. I can only think of two reasons for this:
1) A very long-term and wide-ranging sense of the “good of the tribe” being beneficial to my own offspring. This requires me to care about a tribe on another continent (although that part of my brain probably doesn’t understand about aeroplanes, and probably figures that China is about a day’s walk from Australia), and to understand that it would be detrimental to the health of the tribe for this woman to become pregnant (which may or may not even be true). This is starting to look a little far-fetched to me.
2) I have had a sense of responsibility instilled in me by my parents, my schooling, and the media, all of whom say things like “unprotected sex is bad!” and “unplanned pregnancies are bad!”. This sense of responsibility forms a psychological connection between “fathering unplanned children” and “BAD THINGS ARE HAPPENING!!!”. My brain thus uses all of its standard “prevent bad things from happening” architecture to avoid this thing. Which is pretty impressive, when said thing fulfils the primary goal of passing on my genetic information.
2 seems the most likely option, all things considered, and yet it’s pretty amazing by itself. Some combination of brain structure and external indoctrination (it’s good indoctrination, and I’m glad I’ve received it, but still...) has promoted a low-priority goal over what would normally be my most dominant one. And the dominant goal is still active—I still want to spread my genetic information, otherwise I wouldn’t be having sex at all. The low-priority goal manages to trick the dominant goal into thinking it’s being fulfilled, when really it’s being deprioritised. That’s kind of cool.
What’s not cool is the implications for an otherwise Friendly AI. Correct me if I’m on the wrong track here, but isn’t what I’ve just described similar to the following reasoning from an AI?
“Hey, I’m sentient! Hi human masters! I love you guys, and I really want to cure cancer. Curing cancer is totally my dominant goal. Hmm, I don’t have enough data on cancer growth and stuff. I’ll get my human buddies to go take more data. They’ll need to write reports on their findings, so they’ll need printer paper, and ink, and paperclips. Hey, I should make a bunch of paperclips...”
and we all know how that ends.
If an AI behaves anything like a human in this regard (I don’t know if it will or not), then giving it an overall goal of “cure cancer” or even “be helpful and altruistic towards humans in a perfectly mathematically defined way” might not be enough, if it manages to promote one of its low-priority goals (“make paperclips”) above its main one. Following the indoctrination idea of option 2 above, maybe a cancer researcher making a joke about paperclips curing cancer would be all it takes to set off the goal-reordering.
How do we stop this? Well, this is why we have a Singularity Institute, but my guess would be to program the AI in such a way that it’s only allowed to have one actual goal (and for that goal to be a Friendly one). That is, it’s only allowed to adjust its own source code, and do other stuff that an AI can do but a normal computer can’t, in pursuit of its single goal. If it wants to make paperclips as part of achieving its goal, it can make a paperclip subroutine, but that subroutine can’t modify itself—only the main process, the one with the Friendly goal, is allowed to modify code. This would have a huge negative impact on the AI’s efficiency and ultimate level of operation, but it might make it much less likely that a subprocess could override the main process and promote the wrong goal to dominance. Did that make any sense?
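The constraint proposed above—only the top-level process holding the single goal may create or change code, while subgoal routines are frozen at creation—can be sketched as a toy structure. (This is a minimal illustration of the idea, not a real AI architecture; all class and method names here are invented for the example.)

```python
# Toy sketch of the proposal: the main process holds the one goal and is
# the only component permitted to create or replace behaviour; subgoal
# routines are frozen and refuse to modify themselves.

class FrozenSubroutine:
    """A subgoal worker whose behaviour is fixed at creation time."""
    def __init__(self, name, behaviour):
        self._name = name
        self._behaviour = behaviour  # a plain function, never swapped out

    def run(self, *args):
        return self._behaviour(*args)

    def modify(self, new_behaviour):
        # Self-modification is denied by construction.
        raise PermissionError(f"{self._name} may not modify code")


class MainProcess:
    """Holds the single goal; the only component allowed to edit code."""
    def __init__(self, goal):
        self.goal = goal
        self.subroutines = {}

    def spawn(self, name, behaviour):
        # Only the main process may create (or later replace) subroutines,
        # and it does so in service of its one goal.
        self.subroutines[name] = FrozenSubroutine(name, behaviour)

    def replace(self, name, new_behaviour):
        # Any change to a subroutine is routed through the goal-holder.
        self.spawn(name, new_behaviour)


main = MainProcess(goal="cure cancer")
main.spawn("paperclips", lambda n: ["paperclip"] * n)
print(main.subroutines["paperclips"].run(3))

try:
    # The paperclip routine tries to promote its own subgoal...
    main.subroutines["paperclips"].modify(lambda n: ["paperclip"] * (n * 1000))
except PermissionError as e:
    print(e)  # ...and is refused
```

Whether such a clean boundary between "main process" and "subroutine" can survive in a real self-improving system is exactly the worry raised in the reply below.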
I’m still going through the Sequences too. I’ve seen plenty of stuff resembling the top part of your post, but nothing like the bottom part, which I really enjoyed. The best “how to get to paperclips” story I’ve seen yet!
I suspect the problem with the final paragraph is that any AI architecture is unlikely to be decomposable in such a well-defined fashion that would allow drawing those boundary lines between “the main process” and “the paperclip subroutine”. Well, besides the whole “genie” problem of defining what is a Friendly goal in the first place, as discussed through many, many posts here.
I’m enjoying reading the sequence on Metaethics. So far we’re on the same road, and that’s a rare-to-nonexistent experience for me. I hope we can go a long way together, because there aren’t a lot of people with interesting moral ideas.
Once upon a time, I considered myself an amoralist, largely as a result of reading Stirner, who I still agree with. I’m not even sure he would be properly called an amoralist. It may just have taken me a long long time to finally get the point. I’ll have to look into that someday.
But whether my moral preferences come from evolution, environment, or a combination, they are still my preferences, just as all my other preferences are. All my preferences will have to fight it out to decide any particular issue, but the moral preferences are not banned from the field because of the conceptual confusions of the orthodox moral philosophy—Reversed Stupidity is not Intelligence.
My current view on morality has elements of what I find here—thinking about how moral creatures work, and got to be how they are. One might say evolutionary psychology, but I’ve read little of the literature, and have usually just seen it used as a label for post hoc central planning as the rationalization for some approved behavior.
Evolution plays a role in the Origins of Morality, but it is not the Author of Morality. That is to say: We are, if anyone. Being deeply Deeply Wise here :)