The Lens That Sees Its Flaws
Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.
Here is the secret of deliberate rationality—this whole process is not magic, and you can understand it. You can understand how you see your shoelaces. You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not.
Mice can see, but they can’t understand seeing. You can understand seeing, and because of that, you can do things that mice cannot do. Take a moment to marvel at this, for it is indeed marvelous.
Mice see, but they don’t know they have visual cortexes, so they can’t correct for optical illusions. A mouse lives in a mental world that includes cats, holes, cheese and mousetraps—but not mouse brains. Its camera does not take pictures of its own lens. But we, as humans, can look at a seemingly bizarre image, and realize that part of what we’re seeing is the lens itself. You don’t always have to believe your own eyes, but you have to realize that you have eyes—you must have distinct mental buckets for the map and the territory, for the senses and reality. Lest you think this a trivial ability, remember how rare it is in the animal kingdom.
The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world. It is the sort of thing mice would never invent. Pondering this business of “performing replicable experiments to falsify theories,” we can see why it works. Science is not a separate magisterium, far away from real life and the understanding of ordinary mortals. Science is not something that only applies to the inside of laboratories. Science, itself, is an understandable process-in-the-world that correlates brains with reality.
Science makes sense, when you think about it. But mice can’t think about thinking, which is why they don’t have Science. One should not overlook the wonder of this—or the potential power it bestows on us as individuals, not just scientific societies.
Admittedly, understanding the engine of thought may be a little more complicated than understanding a steam engine—but it is not a fundamentally different task.
Once upon a time, I went to EFNet’s #philosophy chatroom to ask, “Do you believe a nuclear war will occur in the next 20 years? If no, why not?” One person who answered the question said he didn’t expect a nuclear war for 100 years, because “All of the players involved in decisions regarding nuclear war are not interested right now.” “But why extend that out for 100 years?” I asked. “Pure hope,” was his reply.
Reflecting on this whole thought process, we can see why the thought of nuclear war makes the person unhappy, and we can see how his brain therefore rejects the belief. But if you imagine a billion worlds—Everett branches, or Tegmark duplicates[1]—this thought process will not systematically correlate optimists to branches in which no nuclear war occurs.[2]
To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes.
If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping engine that has flaws—then you can apply a reflective correction. The brain is a flawed lens through which to see reality. This is true of both mouse brains and human brains. But a human brain is a flawed lens that can understand its own flaws—its systematic errors, its biases—and apply second-order corrections to them. This, in practice, makes the lens far more powerful. Not perfect, but far more powerful.
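The second-order correction can be made concrete with a toy model. Everything below is an invented illustration, not anything from the text: an instrument that systematically reads 20% high misleads a user who trusts it naively, but once the user has a model of the flaw, the readings can be divided back to the truth.

```python
import random

random.seed(0)

# Toy model of a "flawed lens": an instrument that systematically
# reads 20% high. First-order use trusts the raw readings; the
# second-order correction models the flaw and compensates for it.

TRUE_VALUE = 10.0
BIAS_FACTOR = 1.2   # systematic error: reads 20% high

def flawed_reading():
    noise = random.gauss(0, 0.1)
    return TRUE_VALUE * BIAS_FACTOR + noise

# First-order estimate: average the raw readings.
raw = [flawed_reading() for _ in range(1000)]
first_order = sum(raw) / len(raw)

# Second-order estimate: knowing the lens's flaw, divide it out.
second_order = first_order / BIAS_FACTOR

print(f"first-order estimate:  {first_order:.2f}")   # ~12.0
print(f"second-order estimate: {second_order:.2f}")  # ~10.0
```

The corrected lens is not perfect—the random noise remains—but modeling the systematic error removes most of the distortion, which is the essay's point about second-order corrections.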
[1] Max Tegmark, “Parallel Universes,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, ed. John D. Barrow, Paul C. W. Davies, and Charles L. Harper Jr. (New York: Cambridge University Press, 2004), 459–491, http://arxiv.org/abs/astro-ph/0302131.
[2] Some clever fellow is bound to say, “Ah, but since I have hope, I’ll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility. So the two events are related after all.” At this point, we have to drag in Bayes’s Theorem and measure the relationship quantitatively. Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.
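The footnote's quantitative point can be sketched with Bayes's Theorem in odds form. All numbers below are invented for illustration—the prior, the likelihood ratio, and the optimist's stated confidence are assumptions, not figures from the text:

```python
# Sketch of the footnote's point: shifting your beliefs by a large
# amount, due to an event (your own optimistic nature) that only
# slightly increases your chance of being right, messes up the map.

def posterior(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior_no_war = 0.70   # hypothetical prior P(no nuclear war)

# Suppose being an optimist is only weak evidence: optimists are
# barely more common in no-war branches (likelihood ratio ~1.05).
justified = posterior(prior_no_war, 1.05)  # P(no war | I am hopeful)

# But the optimist shifts his belief as if hope were strong evidence.
wishful = 0.99   # his actual stated confidence

print(f"justified confidence in 'no war': {justified:.3f}")
print(f"wishfully shifted confidence:     {wishful:.3f}")
```

A likelihood ratio of 1.05 licenses moving from 70% to about 71%; moving to 99% instead is exactly the kind of oversized shift the footnote warns about.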
Eliezer, nice to read your opinions on a whole range of things in this post, but I think it would be more helpful to us for you to not state your opinions as fact (and to not state overcertainty about the existence and mechanics of various phenomena).
Can you be specific with your criticism?
Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.
Here is the secret of deliberate rationality—this whole entanglement process is not magic, and you can understand it.
But if we were minds in a vat, or cogs in the Matrix, we would still be able to reason rationally and make intelligent predictions about the world we see. And test them, and improve our predictions and discard the ones that are wrong. We can be rational about the real world, even if the real world is an illusion.
So I don’t see how we can found rationality on our understanding of the world (a world we only understand through reason). In this argument, where is the egg that was not born of a chicken?
HA, your objection is too vague for me to apply. Specify.
But if we were minds in a vat, or cogs in the Matrix, we would still be able to reason rationally and make intelligent predictions about the world we see. And test them, and improve our predictions and discard the ones that are wrong. We can be rational about the real world, even if the real world is an illusion.
I do not understand your bizarre concept, illusion. Whatever is, is real. Sometimes the underlying levels of organization are different from what you expected.
So I don’t see how we can found rationality on our understanding of the world (a world we only understand through reason). In this argument, where is the egg that was not born of a chicken?
That’s why I distinguished deliberate rationality. Seeing your shoelaces is also rational, for it produces beliefs that are themselves evidence; but it is not a process that requires deliberate control. The lens sees, even in mice; but only in humans does the lens see itself and see its flaws.
You are an Objectivist! Reading your posts is like reading Peikoff. Thank you for your work in correcting human error.
Oh, and if you aren't an Objectivist, give Atlas Shrugged a good, objective reading :) . You'll be very surprised.
Related: Guardians of Ayn Rand (and, tangentially, In Defense of Ayn Rand).
Re: wishful thinking, I've personally seen this before, where people explicitly reject reason on an important topic; I knew a rabbi in Minnesota who insisted the Israeli-Palestinian peace process would succeed, simply because "it must succeed." Usually people only explicitly reject reason on "one thought too many" topics like "I would never even think about betraying my friends," but wishful-thinking topics such as your nuclear-war example don't seem to fit this mold.
Anyone know what the research says on this? I know people faced with death will shift their values, but to what degree and in what directions do they shift their estimated probability of deaths and disasters when the disaster involves them or people they care about? And is this just part of a more general wishful-thinking bias? (Not that I know what the research says about wishful thinking, either.)
Conjecture: a New Yorker is more likely to see D.C. as the likely first target for a terrorist nuclear bomb, compared with a D.C. resident.
Eliezer,
Here’s a good example in your reply to Stuart: “but only in humans does the lens see itself and see its flaws”. Here I think, as in my previous critical post, that you’re “stat[ing] overcertainty about the existence and mechanics of various phenomena”.
One might say that writing these statements in a more provisional and tentative fashion, such as "As far as we can tell, some humans are the only things capable of analyzing flaws in their ability to observe the universe, and pointing out this exceptionalist element about some humans is of use because of X," makes communication too cumbersome, and that there's no need to say so because such nuances are implied.
But I disagree. I think the overcertain style of writing you and some other commenters fall into is less helpful for discussing this stuff than a greater level of nuance, and framing ideas and knowledge more provisionally.
In short, I’m requesting greater transparency about our bounded rationality in your posts.
HA, it is indeed too cumbersome. See also Orwell’s “Politics and the English Language.”
Ad hominem tu quoque: You didn't rewrite your own comment in the cumbersome style you wanted me to use. In fact, your initial comment was so minimal that I couldn't apply it, and it was not qualified at all.
The “lens” sees perhaps only parts of itself, and then perhaps only some of its flaws.
For hope to be useless, it requires the premise that God does not exist. If God exists, then the rational thing is to hope, and not just in the improbable but in the impossible.
As a Catholic, I am willing to abstain from food and sex at times. I even like to think that I would give my life for my faith. But you atheists are fanatical. Sacrificing hope is too hardcore. First you sacrifice faith, then hope, what’s next love?
If it is true that "if God exists, then the rational thing is to hope, and not just in the improbable but in the impossible," then that fact is itself strong evidence against the existence of God.
But who said anything about sacrificing hope? Eliezer argues against wishful thinking, which is not at all the same thing as hope. Oh, and the idea that “faith, hope and love” are the same kind of thing—so much the same kind of thing that abandoning two of them would be likely to lead to abandoning the third—seems to me to have no support at all outside the First Letter to the Corinthians; why should Eliezer fear that abandoning faith and (what you rather bizarrely call) hope should lead to abandoning love?
Faith, hope and love are the Christian theological virtues. I would argue that they are at the core of what it is to live a fully human life. It looks like this website has rejected the theistic understanding of faith and hope. I don't see what is stopping the rejection of love, too, on the grounds that it is a strong biasing factor; I don't know how you can love something without it making you biased towards it. To be truly unbiased, we should not love humanity, and the logical conclusion of that is that man is insignificant: what we are does not matter in the scope of time and space. You may not like my conclusion, but I don't see how it does not follow from the atheistic premises that this website holds.
“I would argue that they are at the core of what it is to live a fully human life.”
A fully human life, in the natural sense of the term, has an average span of sixteen years. That's the environment we were designed to live in: nasty, brutal, and full of misery. By the standards of a typical human tribe, the Holocaust would have been notable for killing such a remarkably small percentage of the population. Why on Earth would we want to follow that example?
“It looks like this website has rejected the theistic understanding of faith and hope.”
Yes, for a very good reason: it does not work. If you stand in front of a truck, and you have faith that the truck will not run you over, and you hope that the truck will not run you over, your bones and vital organs will be sliced and diced and chopped and fried. The key factor in survival is not hope, or faith, but not doing stupid things such as standing in front of trucks.
“I don’t know how you can love something without it making you biased towards it.”
This is not what we mean by “biased”. By “bias”, we mean bugs in the human brain which lead us to give wrong answers to simple questions of fact, such as “What is the probability of X?”. See http://www.acceleratingfuture.com/tom/?p=30.
What the heck? Humans who lived past infancy lived far longer than sixteen years in the ancestral environment; it was very poor infant mortality that brought down the average life expectancy.
"The typical human tribe" would not have gone around murdering whole other tribes; there is no evidence for that, and it is not what modern isolated hunter-gatherers do either.
Agreed on infant mortality: ‘life expectancy’ is an incredibly misleading term, and leads to any number of people thinking that anyone over 40 was an old man in previous centuries, when a lot of the difference can be explained by infant mortality.
On human tribes, I don’t think slaughtering an entire other tribe is a particularly shocking thing for a tribe to do. I’ve read things suggesting that 20th century rates of personal homicide and deaths in war per person are both actually low by previous centuries’ standards, so the popular idea of the Holocaust and Communist purges as making the 20th century the century of war or atrocity is flawed. But agreed this doesn’t make Holocausts ‘typical’.
Isn’t the 20th century’s apparent low death toll from homicide and war just a matter of percentages? The absolute number of deaths from these things is much greater in the 20th century. I think the absolute number matters too.
I came across plenty of examples in my studies of anthropology. Of course, it depends what you mean by "tribe"; really large-scale violence requires a certain amount of technology. As an example, "Yanomamo: The Fierce People" by Chagnon details some such incidents and suggests they were not unusual. Well, actually the men and children were killed; the nubile women were kept alive for .
See also the Torah / Old Testament for numerous genocides, though these were bronze/iron age people and also the historicity of the incidents is disputed.
This was not universal—the Kalahari Bushmen (now called the San people) did not do this, perhaps in part because their main weapon was a slow acting poison dart. An all-out war would kill everyone involved.
But rates of violent death among males in hunter-gatherer societies were extremely high, often in the 30-50% range, as documented by early anthropologists (by reconstructing family trees).
What about other religions? Islam and Judaism come to mind, but there are also non-Abrahamic religions that advocate faith, hope and love. Why are you exclusively a Christian, and not a Muslim, a Jew, a Buddhist or a Pagan? Why are you a Catholic instead of a Protestant? If you were born in China in the early 20th century, would you be a Catholic? If so, why? If not, why are you a Catholic here and now?
In a lot of ways we don't have a shared vocabulary. When I said "fully human life" I was not using the term in the natural sense; our understandings of humanity are different. I see man as made in the image of God. You see man as just another animal, a product of evolutionary mechanisms. I guess the closest secular term I can use to convey what I am saying is Maslow's self-actualization, but transcendent.
Sure, God is not going to change natural law just because we are putting him to the test. Twelve poor followers of Christ were able to convert the Roman empire; I have a hard time believing the virtue of hope was not involved. I could go into the lives of the saints for other examples, but I won't.
You call getting at the probability of nuclear war a simple question? And why doesn't love lead to bugs in the human brain that lead us to wrong answers?
Because the Catholic Faith is true. But this is getting off topic.
Cure, you’re making too many comments. A good rule of thumb is that you should never have made more than 3 and preferably 2 of the 10 most recent comments. You’ve made it clear what you believe; everyone knows you’re a Catholic now; you do not need to repeat it further.
“I see man as made in the image of God.”
This does make some sense. If man is made in the image of God, and we know God is a mass murderer, then we can predict that some men will also be mass murderers. And lo, we have plenty of examples: Hitler, Stalin, Mao, Pol Pot, etc.
“Sure God is not going to change natural law just because we are putting him to the test.”
If God does exist, as soon as we finish saving the world and whatnot, he should be immediately arrested and put on trial for crimes against humanity, due to his failure to intervene in the Holocaust, the smallpox epidemics, WWI, etc.
“Twelve poor followers of Christ were able to convert the Roman empire.”
Aye. And Karl Marx must have had divine powers too: how else could a single person, with no political authority, cause a succession of revolutions in some of the largest countries on Earth?
“I could go into the lives of the saints for other examples but I wont.”
How do you know that large parts of their lives weren’t simply made up?
"You call getting at the probability of nuclear war a simple question?"
Read the literature on heuristics and biases: researchers deliberately use simple questions with factual answers, so that the data unambiguously show the flaws in human reasoning.
CoA, if you “would argue that [faith, hope and love] are at the core of what it is to live a fully human life” then why don’t you, rather than just asserting it? (Or, if the argument you’d make is much too long and convoluted, point us to somewhere where it’s made in a non-question-begging way.)
“This website” doesn’t reject anything. It can’t. It’s only a website. A lot of the posters and commenters here disagree with “the theistic understanding of faith and hope”, but people who think otherwise aren’t forbidden to contribute or anything.
Tom, CoA isn’t saying “the apostles converted everyone to Christianity, so it must have been a miracle” (though he may well believe it); he’s saying “Christianity took over much of the world from tiny beginnings; it seems likely that the people involved were more optimistic than the evidence would have seemed to warrant”. He’s probably right about that (see “Small Business Overconfidence”). The same is surely true of at least some of the people involved in the rise of communism. Optimism beyond the evidence probably is an advantage, if your goal is to have a belief that isn’t well supported by the evidence become hugely popular and influential. Demagogues and revolutionaries and medical quacks all tend to be optimistic beyond the evidence.
Well, yes. But scientists need to have optimism that their experiments will lead somewhere, and entrepreneurs have to be optimistic about their projects (and I'm optimistic that this remark will not get me kicked off this site). Without optimism, great projects would not be undertaken.
If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping-engine with flaws in it—then you can apply a reflective correction.
And what is more, you’ll be a man, my son.
Rationalist Snow White: “Mirror, mirror on the wall, do I have any imperfections I have not yet realized on my own?”
Mirror, mirror, what am I missing that’s perfectly obvious to some people?
Mirror, mirror, where do I need to look that’s completely non-obvious?
“Mirror, mirror on the wall, how long is this stick?”
Rotates the stick 90 degrees
“Mirror, mirror on the wall, how long is this stick?”
Quite. I encounter a lot of people with this mindset; they hold to a belief because it makes them happier to, and they prefer being happy and overly optimistic to being realistic and disappointed. Having the self-awareness to realize that's exactly what they're doing is somewhat rarer, perhaps because the awareness makes the illusory belief harder to hold to (it starts to take on characteristics of belief-in-belief?).
The maximization of happiness is, of course, a legitimate value to pursue, but not at the expense of the accuracy of the map. That causes more problems than it solves. And for the notion that our optimist is better off with his or her particular rose-tinted glasses on, there's always the Litany of Gendlin.
I don't think it's necessary for each individual to be aware of their own irrationality or try to become more rational or what have you. You don't have to have any formal study in physics to be great at pool, and you don't need formal study in rationality to do well in life, or even in science specifically. Any flaws in the ability of some individuals to act "rationally" won't matter in the aggregate, because just a small number of people can profit heavily from the economic rent this leaves (in proportion to how much it actually matters) and in the process fix the inefficiency.
"I don't think it's necessary for each individual to be aware of their own irrationality or try to become more rational or what have you."
Necessary? True. Human civilisation has progressed quite far without rationality taking an obvious, prominent stand at its forefront. I wouldn't even say that making rationality worldwide would make life for the average human enough easier to use such a stance as marketing for rationality. But you are forgetting a rather obvious (in my opinion) benefit of rational thinking: its efficiency.
Suppose I am confronted with a man who was raised believing bananas induce insanity. How can I convince him otherwise? If neither of us are advocates of rational thinking, it could devolve into a shouting match, with each of us believing the other completely insane. (This is speculation, here.) If I'm an advocate of rational thinking, I might suggest experimenting by feeding bananas to previously confirmed sane people as a way to prove him wrong, if I don't think there's a chance of him being right; this takes more time than a shouting match. If I decide to approach the issue with the caution of a scientist, I'd need to proceed slowly and carefully, because I'd need him to monitor my experiments, taking even more time than a shouting match.
But if we are BOTH rational thinkers, a simple discussion about how billions of banana-eating people should then be somewhere between frothing at the mouth and ticking homicidal time bombs (depending upon his personal definition of insanity) takes about the same time as a shouting match, and (hopefully!) leaves him with the conclusion that bananas do NOT induce insanity. I dare you to argue with rationality's efficiency.
I disagree with the notion that the ability to distinguish the map and the territory separates humans from other animals. Consider this: I am nearsighted. When I look at a sign from far away, I can't make out the letters. However, when I look at a human face from a similar distance, I can recognize it. Clearly my facial-recognition system has adaptations for working with nearsighted eyes: a lens that can see its own flaws. And this couldn't have evolved only in humans; mice probably have similar adaptations.
And think about this optical illusion: Nearby objects look bigger than distant objects. Yet we don’t think this as an illusion at all, because we are so good at adjusting to it.
What about this: we have mechanisms to make proteins based on DNA sequences, but do we have any mechanisms for telling whether we have the right DNA sequence? Yes, we do. Nearly every organism has error-correcting processes that run right after replication (where errors are most likely to be created), and many ways to avoid being fooled by viruses.
In none of these cases does the organism form a theory about how its lens is flawed and then correct itself based on that theory. So the difference is not in seeing flaws, but in the fact that humans build theories of far greater sophistication than other animals do.
The DNA replication mechanism relies on proofreading each base right after it has been appended to the new copy. If the newly added base does not match the template, it is corrected before the process moves on to the next one. It’s a hard-coded biological mechanism, occurring locally within a cell. [1]
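That "copy, check, correct, advance" loop can be sketched as a toy model. The base-pairing rules here are real biochemistry; the error rate, function name, and the idea of modeling the polymerase with a random draw are illustrative assumptions only, not a claim about actual enzyme kinetics:

```python
import random

# Watson-Crick base pairing: each template base determines its complement.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def replicate_with_proofreading(template, error_rate=0.01, seed=0):
    """Copy a DNA template base by base. After appending each base,
    proofread it against the template and correct any mismatch,
    mimicking the local, stepwise correction described above."""
    rng = random.Random(seed)
    copy = []
    for base in template:
        # Polymerase step: usually adds the complement, occasionally errs.
        if rng.random() < error_rate:
            new_base = rng.choice("ACGT")
        else:
            new_base = COMPLEMENT[base]
        copy.append(new_base)
        # Proofreading step: check only the base just added; fix it if
        # wrong, before the loop advances to the next position.
        if copy[-1] != COMPLEMENT[base]:
            copy[-1] = COMPLEMENT[base]
    return "".join(copy)
```

Note that with the proofreading step in place, the output is the exact complement of the template even if every polymerase step errs; remove those two lines and mutations survive into the copy. That locality, checking only the segment just written against a fixed rule, is what distinguishes this kind of correction from forming a theory about one's own flaws.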
What’s uniquely human in this argument is the ability to apply a corrective mechanism on the logical, or epistemological, level. The mechanism itself must be grounded in physical processes happening within our bodies, yet it extends to the realm of thoughts. Humans, through evolving culture, found out that there is an innate bias, and then realized that we can make better predictions about the world if we compensate for it. That’s what (I think) Eliezer meant by applying a second-order correction to first-order thoughts. Our models of our physiology and mental processes produce an estimate of that error: the more accurate the model, the better the estimate of the corrective error, and ultimately the more objective our view of reality. Corrective mechanisms on the cellular, tissue, or organ level are present across the whole animal kingdom. In fact, they are the basis of life, but they are not what this article is about.
Setting this distinction aside, do we actually have any evidence of thinking about thinking being a uniquely human ability? Without doing the heavy lifting of investigating the corpus of data, I’d imagine this ability lives on a spectrum with some of the other species showing at least a minimal degree of self-reflection. My intuition is that a second-order correction wouldn’t be possible without linguistic and symbolic capabilities, and traces of these are also present in other animals—like dolphins.
[1] - https://bio.libretexts.org/Bookshelves/Introductory_and_General_Biology/Map%3A_Raven_Biology_12th_Edition/14%3A_DNA-_The_Genetic_Material/14.06%3A_DNA_Repair
I think you have a point, but I’m not sure about your examples:
The facial recognition system is working with poor information from the eye, but it is not a part of it; it cannot correct for flaws in itself.
We evolved to do so. There is error correction, yes, but it is fixed; when this misleads us it does not fix itself. (Or does it? Our sensory systems are absurdly adaptable, I wouldn’t be surprised. If so, that would be a good example.)
Ayn Randians extend that to everything. According to them, the bent-stick illusion isn’t an illusion because sticks are supposed to look bent in water.
Can you taboo “supposed” there?
(Also, I think they’re called “Objectivists”)
Not everyone gets the big-O small-o distinction.
It’s not really my theory, but O-ists define “illusion” in such a way that only miraculous exceptions to the laws of nature could be illusions.
Eliezer (if you see this): is there a reason you feel the need to talk about Everett branches or Tegmark duplicates every time you speak about the interpretation of probability, or is it just a physically realisable way to talk about an ensemble? (Though I’m not sure if you can call them physically realisable if you can’t observe them.)
I only recently got involved with LessWrong, and I’d like to explicitly point out that this is a tangent. I made this account to make an observation about the following passage:
First, let me say that I agree with your dismissal of the instance, but I think the idea suggests another argument that is interesting and somewhat related. While the accuracy of an estimate relies very little upon an individual’s beliefs or actions, similar to explanations of the Prisoner’s Dilemma or why an individual should vote, I can see a reasonable argument that a person’s beliefs can represent a class of individuals that can actually affect probabilities.
Arguing that hope makes the world better and so staves off war still seems silly, as the effect would still likely be very small, so instead I argue from the perspective of the “reasonableness” of actions. I read “pure hope” as revealing a kind of desperation, representing an unwillingness to consider nuclear war a reasonable action in nearly any circumstance. A widespread belief that nuclear war is an unreasonable action would certainly affect the probability of a nuclear war occurring, both for political reasons (fallout over such a war) and statistical ones (government officials are drawn from the population), and so such a belief could actually have a noticeable effect on the possibility of a nuclear war. Furthermore, it can be argued that, for a flesh-and-blood emotional being with a flawed lens, viewing a result as likely could make it seem less unreasonable (more reasonable). As such, one possible argument for why nuclear war may happen later rather than earlier would look like: nuclear war is widely regarded as an unreasonable action to take, and the clear potential danger of nuclear war makes this view unlikely to change in the foreseeable future.
Following this, an argument that it is beneficial to believe that nuclear war will happen later: Believing that nuclear war is likely could erode the seeming “unreasonableness” of the action, which would increase the likelihood of such a result. As a representative of a class of individuals who are thus affected, I should therefore believe nuclear war is unlikely, so as to make it less likely.
I am not claiming I believe the conclusions of this argument, only that I found the argument interesting and wanted to share it. The second argument is also not an argument for why it is unlikely, and is rather an argument for why to believe it is unlikely, independent of actual likelihood, which is obviously something a perfect rationalist should never endorse (and why the argument relies on not being a perfect rationalist). If anyone is interested in making them, I’d like to hear any rebuttals. Personally, I find the “belief erodes unreasonableness” part the most suspect, but I can’t quite figure out how to argue against it without essentially saying “you should be a better rationalist, then”.
What evidence is there that mice are unable to think about thinking? Because of the communication barrier, mice can’t tell us whether they can think about thinking or not.
“drag in Bayes’s Theorem and…”: the link was moved to http://yudkowsky.net/rational/bayes/, but Eliezer seems to suggest https://arbital.com/p/bayes_rule/?l=1zq over it. (And it’s really, really good.)
Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.
I only need to assume that everybody else, or at least many other people, are as irrationally optimistic as I am; then the combined effect of optimism on the world could well be significant and produce a 20% change. The assumption is not at all far-fetched.
I’ve probably committed a felony by doing this, but I’m going to post a rebuttal written by GPT-4, and my commentary on it. I’m a former debate competitor and judge, and have found GPT-4 to be uncannily good at debate rebuttals. So here is what it came up with, and my comments. I think this is a relevant comment, because I think what GPT-4 has to say is very human-relevant.
I find this mirroring from “deliberate rationality” to “intuitive wisdom” and equating cognition to instinct very interesting and not at all obvious, even from the perspective of a human former debate judge. It’s a great rebuttal IMO. It points out human deficiencies in our inability to detect cheese, which is arguably more important to mice than their ability to philosophise.
IMO another interesting insight—what does “can’t understand seeing” from the human even mean? I’d count this as another decent debate rebuttal probably +1 for team mouse.
If I were judging a debate between mice and humans, I would score this for the mice. The human is arguing “But we, as humans, can look at a seemingly bizarre image, and realize that part of what we’re seeing is the lens itself,” whereas the mouse is arguing that its abilities are tuned for survival over theory, and how deficient that balance sometimes is among humans. I like this counter-argument for the mice: practicality over self-perceived superiority. Empathising with another species’ values is something that even my more philosophical human friends struggle with.
Lots of parroting here, but the switch from “inside laboratories” to “the wilderness”, and the argument that instinct is a better alignment strategy than science, are both very interesting to me. I wouldn’t award any points here, pending more arguments.
I found this quote inspiring, if I was a mouse or other animal. I may have to print a “mouse power” t-shirt.
A mouse’s instinct is being equated to a steam engine; an interesting pivot, but the contrasting statements still hold water compared to the original, IMO.
Very much parroting here, but I would note “manipulating your mental map” as a counterpoint to “tampering with mental paintbrushes” is an interesting equivalence. I also respect the re-framing of hope as a human flaw, in contrast with the reality-based instincts of a mouse.
Arguing for efficiency over power, and reality over perception, is an argument that would be an interesting avenue to be pursued as a debate judge. As well as the concept of a mouse brain being flawless, as an argument presented by an AI.
At the above paragraph, it ran out of tokens after “—without”, so I prompted it “That’s great, please finish the essay.” and everything after that (above and below) were what followed.
As a debate judge, pretty decent summary of key rebuttals.
A solid foundational rebuttal of the type I would have used back in my days of competitive debate. Probably better than anything I would have written on the fly.
Great re-framing of a debate (efficiency vs power or creativity).
For a formal debate, I would rate GPT-4’s rebuttal very highly in a real-world “humans vs. mice” debate scenario. The outcome of Eliezer vs. Team Mouse would almost certainly come down to delivery, given the well-reasoned arguments on both sides. Overall, it is well above the quality of argument I would expect from top-tier debate teams at the high school level, and above average for the college level.
I’ve experimented with doing Lincoln-Douglas style debates with multiple GPT-powered “speakers” with different “personalities”, and it’s super interesting and a great brainstorming tool. Overall I consider GPT-4 to be vastly superior to the average twelfth-grader in general purpose argumentative debating, when prompted correctly.
Hopefully this is constructive and helps people get back to the basics—questioning human-centric thinking, trying to understand what alien intelligence may look like, and how it may challenge entrenched human biases!
Let’s see—in this post the author thinks about thinking about thinking; so: third order, right? And this comment: fourth?