Cached Thoughts
One of the single greatest puzzles about the human brain is how the damn thing works at all when most neurons fire 10–20 times per second, or 200Hz tops. In neurology, the “hundred-step rule” is that any postulated operation has to complete in at most 100 sequential steps—you can be as parallel as you like, but you can’t postulate more than 100 (preferably fewer) neural spikes one after the other.
Can you imagine having to program using 100Hz CPUs, no matter how many of them you had? You’d also need a hundred billion processors just to get anything done in realtime.
If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. And it’s a very neural idiom—recognition, association, completing the pattern.
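A minimal sketch of the caching idiom in code (Python; the names are invented for illustration, not anything from the essay):

```python
# Memoization: compute each result once, store it, and look it up next time.
_cache = {}

def recompute(x):
    return x * x  # stand-in for any expensive, from-scratch computation

def lookup(x):
    if x not in _cache:
        _cache[x] = recompute(x)  # cache miss: think it through once
    return _cache[x]              # cache hit: just complete the pattern
```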
It’s a good guess that the actual majority of human cognition consists of cache lookups.
This thought does tend to go through my mind at certain times.
There was a wonderfully illustrative story which I thought I had bookmarked, but couldn’t re-find: it was the story of a man whose know-it-all neighbor had once claimed in passing that the best way to remove a chimney from your house was to knock out the fireplace, wait for the bricks to drop down one level, knock out those bricks, and repeat until the chimney was gone. Years later, when the man wanted to remove his own chimney, this cached thought was lurking, waiting to pounce . . .
As the man noted afterward—you can guess it didn’t go well—his neighbor was not particularly knowledgeable in these matters, not a trusted source. If he’d questioned the idea, he probably would have realized it was a poor one. Some cache hits we’d be better off recomputing. But the brain completes the pattern automatically—and if you don’t consciously realize the pattern needs correction, you’ll be left with a completed pattern.
I suspect that if the thought had occurred to the man himself—if he’d personally had this bright idea for how to remove a chimney—he would have examined the idea more critically. But if someone else has already thought an idea through, you can save on computing power by caching their conclusion—right?
In modern civilization particularly, no one can think fast enough to think their own thoughts. If I’d been abandoned in the woods as an infant, raised by wolves or silent robots, I would scarcely be recognizable as human. No one can think fast enough to recapitulate the wisdom of a hunter-gatherer tribe in one lifetime, starting from scratch. As for the wisdom of a literate civilization, forget it.
But the flip side of this is that I continually see people who aspire to critical thinking, repeating back cached thoughts which were not invented by critical thinkers.
A good example is the skeptic who concedes, “Well, you can’t prove or disprove a religion by factual evidence.” As I have pointed out elsewhere,1 this is simply false as probability theory. And it is also simply false relative to the real psychology of religion—a few centuries ago, saying this would have gotten you burned at the stake. A mother whose daughter has cancer prays, “God, please heal my daughter,” not, “Dear God, I know that religions are not allowed to have any falsifiable consequences, which means that you can’t possibly heal my daughter, so . . . well, basically, I’m praying to make myself feel better, instead of doing something that could actually help my daughter.”
But people read “You can’t prove or disprove a religion by factual evidence,” and then, the next time they see a piece of evidence disproving a religion, their brain completes the pattern. Even some atheists repeat this absurdity without hesitation. If they’d thought of the idea themselves, rather than hearing it from someone else, they would have been more skeptical.
Death. Complete the pattern: “Death gives meaning to life.”
It’s frustrating, talking to good and decent folk—people who would never in a thousand years spontaneously think of wiping out the human species—raising the topic of existential risk, and hearing them say, “Well, maybe the human species doesn’t deserve to survive.” They would never in a thousand years shoot their own child, who is a part of the human species, but the brain completes the pattern.
What patterns are being completed, inside your mind, that you never chose to be there?
Rationality. Complete the pattern: “Love isn’t rational.”
If this idea had suddenly occurred to you personally, as an entirely new thought, how would you examine it critically? I know what I would say, but what would you? It can be hard to see with fresh eyes. Try to keep your mind from completing the pattern in the standard, unsurprising, already-known way. It may be that there is no better answer than the standard one, but you can’t think about the answer until you can stop your brain from filling in the answer automatically.
Now that you’ve read this, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!
1. See “Religion’s Claim to be Non-Disprovable,” in Map and Territory.
Two words: Stockholm Syndrome
Re-reading this, it isn’t clear what you’re responding to. For future readers, you’re explaining why “death gives meaning to life” is a cached thought.
Indeed, I was wondering about that. For more clarity: it’s a reply to bw’s collapsed comment. It’s not nested since this article was moved from overcomingbias to here, and overcomingbias didn’t have nested comments. You’ll see that a lot in the sequences.
Right.
But the problem was to keep going on, breathing and even sort of thinking in the presence of death in this world.
Thousands of generations of our ancestors had to adapt to death in some way, without any chance to strike back at it at all.
It isn’t your usual “hostage situation,” as they go . . .
bw: Could you please at least provide a citation or reference for us ignorant fools who don’t understand how death gives meaning to life?
I’ll have to agree with the diagnosis of Stockholm Syndrome.
“Death gives rise to meaning” can have many interpretations. One of the most common is that death makes life finite. Each of us only has so many moments in our life, so we should look to get the most out of life by living a meaningful life.
Conversely, if you had infinite time, it would not matter much what you did, because you could do everything, which would mean that nothing really matters all that much.
Yeah, and if you haven’t spent months or, better, years studying astrology you’re in no position to discuss that either, especially dismissively. And if you haven’t personally been abducted by aliens you shouldn’t… [etc] /sarcasm
If one believes to the best of their limited rationality that they have been abducted by aliens then the thing to do is not to jump all over them but to try and discover if, to the best of your rationality, what they say is true. If after examination it isn’t true then you can do what you will, but if to the best of your knowledge what they say holds up then it would be one of the greatest discoveries in recent history. You would certainly need to get more people to also check the claim as there are so many (presumed) bogus claims around.
If we had infinite resources, then yes, you would be right. But our resources are limited and so in order to investigate anything at all, we must also decide to not investigate certain claims. So unfortunately we must dismiss many claims out of hand, without making the slightest effort to investigate them, not because we are dogmatic but because the finiteness of resources forces us to choose. On the bright side, there are many people on the planet, and so you can probably find at least a few who will lend you their finite resources. If it turns out that there is good evidence that a person really was abducted by aliens, then the few who had initially been willing to entertain and investigate the claim will probably, evidence in hand, be able to find a slightly larger audience, which in turn will find a still larger audience, and so on until it comes to wide notice.
Surely if only the greatest thinkers thought it, everyone else who holds it to be true has it cached?
Statistical proofs of things don’t necessarily work if the world is controlled by an intelligent entity.
Yes they do. If the world is controlled by an intelligent entity, then statistical proofs tell you about the behaviour of that entity, rather than impersonal laws of physics, but they still tell you what’s likely to happen.
Yes; however, it is conceivable that the intelligent entity is sufficiently complicated that no amount of evidence gathered within the universe could allow us to uniquely identify its nature. This is of course implausible based on prior probability, Solomonoff induction, etc., though.
“Can you imagine having to program using 100Hz CPUs, no matter how many of them you had?”
No, it would be very difficult. But one thing I’m wondering is: what’s the instruction set of the neuron? I’m probably taking the analogy too far. Is it more advanced than add/sub/mult/div?
Yes.
The question is not whether it is a cached thought but whether it is a good thought. And what I claim is that it is both good and extremely difficult to understand, precisely because of our natural bias to avoid death. As for references, I suppose it is a central thought in continental philosophy since Hegel: Heidegger and Jonas, but you can find it elsewhere, even as far away from existentialism as Mayr or Maturana.
It would be far more instructive (imo) to describe how you conceive of death in this way, rather than merely stating that some Very Smart People have.
Note that this person has not posted on this website since October 12th, 2007, and if I am not mistaken these posts may in fact be imported from another website or earlier version of this website.
So how much of the brain’s advanced nature comes from slower processors with better instruction sets, and how much comes from network effects (both spatial and temporal)?
FWIW, as far as caching goes, I’ve noticed cache failures many, many times in my life, mostly when I’m doing something that’s 99% routine but for some reason I should be changing that last 1% and forget to. For example, if I’m supposed to run an errand on the way home, it’s not uncommon for me to forget the errand. I leave work, think the goal is home, and pull the set route from my brain. In fact, driving home is so rote that I often don’t remember all the details of the drive. It’s not uncommon to hit home and then realize I needed to go to the grocery store that was on the way. I often think of “brain farts” like that either as a problem with my really broad decision tree (I often maximize a choice for the local branch I’m in rather than across the whole tree: hmm, I smell gas; hmm, I need light; I know, I’ll light a match!) or as collisions in my memory hash table, like the aforementioned picking of my normal driving routine rather than deviating when I was supposed to.
“Death gives meaning to life” reminds me of this:
http://tvtropes.org/pmwiki/pmwiki.php/Main.WhoWantsToLiveForever
1. “It’s a good guess that the actual majority of human cognition consists of cache lookups. This thought does tend to go through my mind at certain times.”
A funny joke… as if the idea expressed in the first sentence may itself have been cached.
2. “Raised by silent robots” is a catchy phrase. Did you make it up?
“As I have pointed out elsewhere, this is simply false as probability theory.”
I find this phrasing misleading. “False as X” can mean the same thing as “as false as X.”
Another example of a false idea that seems to apply to the cache idea is that there is a material, mechanical explanation for everything. By the evidence from physics we can state with assuredness that there is no material, mechanical explanation for all phenomena, yet few seem to be able to accept the results of the most proven scientific theory in the history of mankind. When it comes to religion I’m not sure that any amount of evidence will convince a true believer.
I think this assumed dichotomy of material/mechanical vs non- is itself a cached thought. I do assume everything can be explained; but whatever mechanism of explanation I use can, if you feel like it, be called “material” or “mechanical”… therefore what you really mean to say is ‘another example of a false idea is that everything can be explained’.
“Operations of thought are like cavalry charges in battle – they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.” —Whitehead
By the evidence from physics we can state with assuredness that there is no material, mechanical explanation for all phenomena, yet few seem to be able to accept the results of the most proven scientific theory in the history of mankind.
Sorry, I don’t know what you’re talking about.
Nick, the explanation of cached thoughts assumes the mind and the brain are the same thing. I’m suggesting that thought comes from an earlier misunderstanding of what science says about the nature of reality. I read someone unhesitatingly repeating a meme, and I thought, “Cached thoughts.” Sorry for not being clearer.
Is caching the best mental model of how these jillions of “100Hz processors” operate?
An alternate: lossy decompression. Rather like, for instance, how dna information is expressed during an individual’s life. (And, one cannot help but suspect, at a much larger scale than that of the lives of individuals.)
A reason to prefer “lossy compression” over “caching”: “Caching” leads one to believe that the information is cached without loss. And, one tends to look around to find where the uncompressed bits can be stored.
But, I’ll admit I’ve failed to put together the pieces of a general intelligence machine using a lossy compression model. So maybe it’s a bogus model, too.
Has anyone built the equivalent of a Turing machine using processor count and/or replicated input data as the cheap resource rather than time?
That is, what could a machine that does everything in one step do in the way of useful work? With or without restrictions on how many replications of the input data there are going in and where the output might come out?
OK, OK. “Dude, what are you smoking?”, right? :)
Felix: Yes, for example see http://en.wikipedia.org/wiki/NC_%28complexity%29
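To make the trade concrete, here is a toy, simulated version of the kind of low-depth computation NC describes (a sketch, not a real parallel machine): with one virtual processor per pair, n numbers are summed in about log2(n) sequential rounds instead of n − 1 sequential additions.

```python
def parallel_sum(values):
    # Simulated tree reduction: each "round" stands for one parallel step in
    # which every virtual processor adds one pair, so n values take about
    # log2(n) sequential rounds instead of n - 1 sequential additions.
    layer = list(values)
    rounds = 0
    while len(layer) > 1:
        layer = [sum(layer[i:i + 2]) for i in range(0, len(layer), 2)]
        rounds += 1
    return layer[0], rounds

print(parallel_sum(range(1, 9)))  # (36, 3): eight values, three rounds
```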
“By the evidence from physics we can state with assuredness that there is no material, mechanical explanation for all phenomena, yet few seem to be able to accept the results of the most proven scientific theory in the history of mankind.”
When did this physics breakthrough happen? Or are you referring to this?
tggp: The breakthrough I’m referring to started in 1900 with Max Planck and ended around 1930 with what is now called quantum physics. If you go to the Nobel Prize website and read Max Born’s acceptance speech you’ll get a good flavor of this. Also, Henry Stapp has written numerous papers along these lines.
You seem to shy away from the obvious conclusion of your otherwise excellent post. Our slow brains are entirely unable to do much reasoning from first principles; therefore we ought to pay strict attention to received ideas that have stood the test of time and have been culturally cached. “Love isn’t rational” strikes me as an excellent example. If our rationality is bounded, as it certainly is, then it is often rational not to try to think things out from first principles, but to accept the evolved memes of the surrounding culture. You may have undermined the whole purpose of this website. Perhaps our biases are important, perhaps they are reliable guides, and overcoming them leaves us with nothing to go on but our slow and fallible reason.
Note that this is not a mere academic argument. The political left has often been prone to the idea that they could throw off the shackles of social convention and replace them with something more rational. This has seldom worked out well. Politically, you’ve made an excellent argument for the very arational Burkean conservative point of view.
I don’t think most of us would agree that everyone out there is playing human rational capacity to the hilt and needs to slow down on attacking its biases and prejudices. After all, the modern critical examination of human biases, while touched upon throughout history, is essentially a century old or less.
Where do you think ancient wisdom comes from, mtraven? From still more ancient wisdom? I’ve tried to rethink a few things myself, and though I’ve gone astray from time to time, I wouldn’t have it any other way. Not for anything in the world. Sometimes you need a stronger weapon than your ancestors have forged, you see. Tsuyoku naritai!
Are “cached thoughts” and “habits” similar?
Eliezer—there is one additional input to surviving ancient wisdom that goes beyond the thought that the ancients put into it, and that is the simple fact of its survival. Even if people came up with an idea for bad reasons, that idea may nevertheless be a good one and may survive on that account. If it survives, then it may be a good idea even though nobody knows why, and even though nobody ever knew why.
I make no recommendation on this basis, I simply point out that there can be more to ancient wisdom than what ancient minds put into it, and an attempt to re-compute, even if it re-captures the original computation, does not necessarily recap the process of selection. (Obviously we would want to distinguish between parasitic and symbiotic memes—the survival of an idea may be, but is not necessarily, a result of its benefit to us.)
Douglas, this is difficult because you appear to prefer to allude to your position rather than state it.
Quantum mechanics, at least according to some ways of interpreting it, does indeed say that some events don’t have any explanation beyond “that’s the way it happened to go”. So far, so good; but what does that have to do with whether the mind and the brain are the same thing? (Actually, I think physicalists would generally say not “the mind and the brain are the same thing” but something more like “the mind is something the brain does” or “the mind is a set of patterns in what the brain does”.)
I suppose the “Copenhagen interpretation” of QM makes conscious observation responsible for wavefunction collapse; but, speaking of “earlier misunderstandings of what science says about the nature of reality”, it might be worth mentioning that AFAIK just about all physicists these days prefer other ways of looking at QM that don’t have that feature.
But I may very well be misunderstanding you or missing the point in some other way. Let’s be more specific. You say that Eliezer’s post reveals that he’s working with bad cached ideas, and that they’re shown to be bad by quantum mechanics. So could you please give a specific example of something Eliezer said that is incompatible with quantum mechanics?
g- I’m saying that the need to explain your thinking by means of brain processes assumes something about the situation that may not be true. I’m not saying that such a research project is doomed to failure, or violates the laws of physics, just that it is not the only explanation that would agree with what has been discovered in physics. I would further say that when the physicists overcame the idea that there must be a material, mechanical explanation for all the phenomena they were studying, we got the most validated scientific theory in history. Sometimes when I see all the difficulties that occur in both neuroscience and philosophy around this issue, I think that another approach might be more appropriate. Otherwise I think the post by Eliezer makes some good points—which is why I tried applying it to the post itself.
Quantum mechanics did not result from overcoming the idea that there must be a material, mechanical explanation for all the phenomena physicists study.
What about quantum mechanics gives us any reason to think that there’s anything wrong with Eliezer’s commitment to understanding minds in terms of brains?
And could you give a specific example of a difficulty in neuroscience or philosophy that results from a commitment to understanding minds in terms of brains? (I find it easier to think of ones that come from a commitment to not understanding minds in terms of brains.)
g- Quantum physics came about because of the recognition that classical physics is wrong. The problem that Max Planck solved by introducing the quantum was “How could any object in this universe exist without that object emitting so much energy that everything would be instantly vaporized?” Not a small problem. The recognition that there is no material, mechanical explanation of all phenomena was important to the development of the new science and the new scientific view of the universe. The revolution was completed (in terms of the experimental evidence) with the Aspect experiments and Bell’s theorem. The revolution in terms of cached thought is continuing. There is nothing wrong with Eliezer’s commitment, a commitment I respect. The idea that science demands material, mechanical explanations is the cached thought that I was pointing out as not true. I believe that would be a valid example of what the post is about. Specific difficulties in neuroscience and philosophy include: the one mentioned by Eliezer in his post; the binding problem (there is no place in your brain where what you experience comes together the way you experience it); the self (there is no “I” in the brain); the experience of conscious will as being causally efficacious; the observed instances of fully functional human beings who have little or no brain; memory (no memory banks in the brain)…
Yes, QM came about because of the recognition that classical physics is wrong. (I would take issue with some details of your one-sentence summary, but it doesn’t matter.) But then you leap from there to “the recognition that there is no material, mechanical explanation of all phenomena”, which is something entirely different.
Bell’s inequality and Aspect’s experiments demonstrating its violation don’t say that there is no material, mechanical explanation of all phenomena. They place limits on what sorts of material, mechanical explanation there might be.
I have no idea how you can (1) say that there is nothing wrong with Eliezer’s commitment to understanding minds in terms of brains, having (0) described his application here of that commitment as “someone unhesitatingly repeating a meme” and, unless I misunderstood your opening comment, characterized his position as that of an unconvinceable “true believer”.
Could you please explain either what grounds you have for thinking that Eliezer thinks that “science demands material, mechanical explanations” or else what grounds you have for thinking that QM shows this to be wrong? (I’m fairly sure that one of those is silly, but which one depends on what you mean by “material, mechanical explanations”.)
It seems to me that abandoning the search for material, mechanical explanations makes the problems you list less problematic only if what you actually do is to abandon the search for explanations. I’ve never seen the least hint of a non-material-mechanical explanation for any of them that actually explains anything.
That story was published in Fine Homebuilding a few years ago.
I happen to know because… I was the idiot who tore down a chimney from the bottom on the advice of my neighbor, and after telling the story to some acquaintances at a timber-framing class up in Vermont and being told “you should write that up and send it to Fine Homebuilding,” I did.
I’ve got a copy of the story hanging off my blog.
Anyway, love the blog. Keep up the good work.
This isn’t some theoretical limit of the human brain; it’s just what they’ve found from testing (or something they just made up, now that I think about it). Whoever they were testing was alive, and was taking full advantage of their soul.
But as you said, we can’t actually recompute everything. No time. So the exhortation to “think!” can’t possibly be followed in more than a small fraction of the cases.
The best we can do is to occasionally recompute certain items. And, if the re-computation is significantly at odds with the cached result, communicate this to others, who are likely to have the same cached result. We can do this in parallel. You can recompute a few things, I’ll recompute a few things, and thousands of others are meanwhile recomputing a few things. Occasionally someone may have a significantly different result, which he’ll hopefully communicate to others. The number of significantly different results will hopefully be only a small fraction of the number of recomputed results, which might bring the sharing of different results within the realm of possibility. For example, if there are 100 of us, and each recomputes 10 results, then collectively we recompute 1000 results (assuming no overlap). Only 1 out of every 100 recomputed results might be different from the cached result. So we only need to share among ourselves the 10 significantly different recomputed results. That is pretty easy, and we will in effect have done an overhaul of 1000 cached results, at the price of only 10 recomputations each and 10 received communications each (and one transmitted communication for each person who computed a new result). Seems doable.
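A sketch of that arithmetic in code, taking the comment’s own numbers as assumptions:

```python
# The comment's numbers, taken as given.
people = 100
recomputes_each = 10
divergence_rate = 1 / 100  # fraction of recomputations that contradict the cache

total_recomputed = people * recomputes_each            # 1000, assuming no overlap
divergent = round(total_recomputed * divergence_rate)  # 10 results worth broadcasting

# Per person: 10 recomputations done, ~10 corrections received, for an
# effective audit of all 1000 cached results.
print(total_recomputed, divergent)  # 1000 10
```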
What seems to be blatantly forgotten is that people believe themselves to be too busy for “meditation” (as in sitting down and thinking, not necessarily in a religious way), which is coincidentally exactly the process for “clearing the cache.” Because we run around all day working and consuming entertainment instead of sitting on a hill watching sheep eat grass, meditation has simply lost its allure. It’s a sad statement, really, because with only 10 minutes of meditation you can free up 1–10 cached thoughts, which when practiced over the course of a year would result in up to 3,650 cached thoughts being revisited. Not as optimal as the above solution, but humans wouldn’t submit to that sort of computer-like efficiency anyway.
Two problems.
First, each of us has a different mind that produces a different thought cache, and most of us probably won’t be able to find much of a trunk build that we can agree on. To avoid conflicts, we’ll have to transition from the current monolithic architecture to a Unix-like modular architecture. But that will take years, because we’ll have to figure out who’s running what modules, and which modules each entry in the thought cache comes from. (You can’t count on lsmod to give complete or accurate results. I’d been running several unnamed modules for years before I found out they were a reimplementation of something called Singularitarianism.)
Second, how much data will we have to transfer (allowing for authentication, error correction, and Byzantine fault-tolerance), and are you sure anyone has enough input and output bandwidth?
I think you’re wrong as a question of fact, but I love the way you’ve expressed yourself.
It’s more like a non-monotonic DVCS; we may all have divergent head states, but almost every commit you have is replicated in millions of other people’s thought caches.
Also, I don’t think the system needs to be Byzantine fault tolerant; indeed we may do well to leave out authentication and error correction in exchange for a higher raw data rate, relying on Release Early Release Often to quash bugs as soon as they arise.
(Rationality as software development; it’s an interesting model, but perhaps we shouldn’t stretch the analogy too far)
g- When I use the word material I mean composed of matter. When I say matter I mean something that has mass and exists as a solid, liquid, or gas. When I say mechanical I mean explainable by causally determined material forces. As Feynman pointed out, “No one has found any machinery behind the law,” referring to modern physics. I’m sorry, the true-believer comment was in response to the comment made by Eliezer about religion. I don’t know that it applies to Eliezer himself—I don’t know him at all. The grounds I have for saying that science does not demand “material explanations” is the fact that the most experimentally tested, validated scientific theory we have does not posit them. I didn’t mean to attack Eliezer, but just the idea that there must be that type of explanation in order to be scientific. I think I was applying the post, as with the earlier comments, out of respect for the basic material covered. I would agree with you wholeheartedly that abandoning the search for explanations would be a huge mistake. I’m suggesting that by removing the unneeded assumption that all explanations are of a certain form, science can advance in new ways. As David Bohm once said, “Progress in science is usually made by dropping assumptions.”
I really enjoyed your post! I would say we cache things we’ve reasoned out ourselves as well. Say you do a mathematical proof of the Pythagorean theorem. At the end of the proof, you might feel you really understand the theorem, but the next year, or even the next day, you have completely forgotten the steps you used to do the proof. You might, with great concentration, be able to extrapolate them again, but you still believe the theorem without recalculating it from scratch. You remember being convinced in the past, and you trust your past self’s judgment.
I think this is why it is so impossible to change many people’s minds on highly politicized matters. They remember having been truly convinced by such-and-such an argument for the correctness of one position, without remembering what exactly the argument was. The feeling of being convinced is what is so hard to forget. Since most people know they are unable to argue well themselves, they trust that their inability to counter your points is their failing as a debater, and that if whoever convinced them of X were here, he would know what to say, because his arguments were so convincing.
I wonder to what extent we depend upon conclusions we came to long ago and trust as our own today. I’ve personally found that I remember my conclusions a whole lot better than my reasoning or the evidence, and need effort to remind myself. But could we function if we didn’t use these conclusions? How much should we trust our past selves?
I don’t think I know of anyone who believes that everything is explicable in terms of causally determined things that have mass and exist as solid, liquid, or gas, still less that everything must be. And I can’t imagine how anything in Eliezer’s original post suggests that he’s insisting on any such limitation.
Neither can I see how this has anything to do with QM (except, I guess, that some versions of QM give us a universe with randomness in it as well as determinism), or with Feynman’s comment about machinery. (The fundamental laws known at any time are by definition laws that no one has found any machinery behind. This was just as true of Newton’s laws in 1700 as of QM in 2000.)
Thinking in text...
Change your mode of cache usage. The brain has two conflicting tendencies here, which I’ll name “contagion” and “cull”. The contagion tendency is the way that related mental objects prime each other. The cull tendency is the way that a firm decision suppresses valid alternates. Your motto should be “first contagion, then never quite cull”. If you cull first, that’s “jumping to conclusions”. If you contagion but don’t cull, that’s called “woolgathering” and “being a ditherer”. But if you can hold down a partial cull, you’ll have alternates primed, and you can mentally turn on a dime. So by focusing first on contagion, you can push alternates above a threshold where they can’t be culled.
Also, intentionally pump in alternates by using De Bono’s “po”.
Also, set a mental tripwire on the feeling of “preaching”, stop and contagion. Reciting cliches always has that feeling that it ought to be followed by the refrain “amen”.
g- The cached thought I’m recognizing as false is that science demands material explanations. When I hear the mind described as the brain, that thought is activated in my thinking. Material, mechanistic = scientific. I don’t know what is in your mind or Eliezer’s. I’m trying to deactivate the thought in my own mind. Isn’t that the point of the post?
Douglas, you appear to have shifted your ground: originally you said “the explanation of cached thoughts assumes the mind and the brain are the same thing” and “I read someone unhesitatingly repeating a meme and thought”, but now you say it’s only your own cached thoughts that you’re concerned about.
I still have no idea why you think that QM makes any difference to how much science “demands material explanations”; with the definition of “material” that you gave it never did, and with any definition of “material” broad enough to impact the idea that “the mind is the brain” QM doesn’t make science any less thoroughly concerned with “material” things.
By all means flush the “material explanations only” thought out of your mental cache. But if you’re replacing it with some idea that QM has done away with science’s commitment to material explanations, then I bet that is a cached thought too, and I think it’s a wrong one.
In 1998, I wrote a rec.arts.int-fiction post called “Believable stupidity” (http://groups.google.com/group/rec.arts.int-fiction/browse_thread/thread/60a077934f89a291/3fffb9048965857d?lnk=gst&q=believable+stupidity#3fffb9048965857d) saying that Eliza, a computer program that matches patterns and fills in a template to produce a response, always wins the Loebner competition because template matching is more like what people do than reasoning is.
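For readers who haven’t seen the technique, here is a minimal sketch of that kind of template matching (Python; the patterns are invented for illustration, not Eliza’s actual script):

```python
import re

# Each rule pairs a pattern with a response template filled from the match.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi think (.+)", re.I), "What makes you think {0}?"),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))  # fill in the template
    return "Tell me more."  # default when no pattern matches

print(respond("I feel tired today"))  # -> Why do you feel tired today?
```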
Herb Simon’s cognitive psych lectures at Carnegie Mellon always started with this same observation of how slow neurons are. He emphasized how bad we are at reasoning logically and how good we are at associative tasks. His and Allen Newell’s work on AI in the early 1960s led to the SOAR project, which models thinking as a big production system that caches effective sequences of inferential steps for later re-use. Simon also used to say that it took about 10 years to accumulate a large enough cache to be considered an expert in something.
Re-reading this, “10 years” resulted in a cache hit in my mind for “10,000 hours.”
Sounds consistent with Jeff Hawkins’s memory-prediction framework.
So is everything else. That’s the problem with it.
Strangely, I have a cached thought of, “That’s bullshit.” This pings almost everything I hear said by people in a particular verbal/non-verbal pattern. For some reason, when someone says something in a manner that matches this verbal/non-verbal pattern I think, “That’s bullshit.” It doesn’t even matter what they are saying. It fires and afterwards I think about it and wonder if it really is bogus.
If someone tells me that love isn’t rational it is very likely that their communication style is going to ping, “That’s bullshit.” Adding a contrarian viewpoint to everything seems to help prevent me from inserting new cached thoughts. It doesn’t, however, help me find currently cached thoughts. Also, I have learned to internalize the response because telling everyone they were wrong wasn’t helping my social life.
This seems to be a cached thought in reaction to a partly physical event. Does this fall under the label cached thought?
thomblake recently posted a comment that has a great antidote for cached thoughts. Reverse the claim and see if it (a) triggers another cached thought or (b) seems as likely given cursory examination.
Well, I won’t. I will be thinking, “Bullshit!”
PS) What are the naughty language expectations here?
Random discussion points related to this behavior:
How does something like Wikipedia relate to cached thoughts?
How do you find cached thoughts in yourself?
How many cached thoughts are hanging around simply to provide excuses for stupid or selfish behavior? Does anyone actually believe love is irrational, or do they merely believe in their belief that love is irrational?
I liked this post a lot. I have the same “bullshit” sense for certain words and thoughts, but my concern is that this is just a bias caused by extrapolating from one example. There are certain political issues, for instance, that I’ve seen so many illogical arguments for that I’m biased against them now.
As far as love being irrational, there actually is some evidence for that.
Hmm… actually, you made me realize there is another part to this reaction. I tend to ignore not-beliefs. I draw beliefs on my map. There isn’t a place for a not-belief. An active negative belief can be drawn, but I see this differently than refusing to accept a belief due to lack of evidence.
In other words, I see a difference between, “I don’t believe the Earth is flat” and “I believe the Earth is not flat.”
I have an argument about this distinction pretty frequently, though. I have no idea how LessWrong feels about it. Also, I am making these terms up as I go along. There are probably more accurate ways to say what I am saying.
But the point is that the “bullshit” response drops its victim into the realm of not-belief. As such, I forget about it and when the question pops up again there isn’t anything in that area of the map to contend with the proposed answer. If the reaction is, again, “bullshit,” nothing will change.
In a more Bayesian framework, you assign each statement a probability of being true, based on all the evidence you’ve collected so far. You then change these probabilities based on new evidence. An active negative belief corresponds to a low probability, and refusing to accept a belief based on lack of evidence might correspond to a slightly higher probability.
Okay, sure, that makes sense. I guess I have a weird middle range between, say, 45-55% that I just drop the belief from the probability matrix altogether because I am lazy and don’t want to keep track of everything. The impact on my actions is negligible until well beyond this threshold.
An exception would be something in which I have done a lot of studying/research. The information, in this case, is extremely valuable. The belief still sits in the “Undecided” category, but I am not throwing out all that hard work.
Is this sort of thing completely sacrilegious toward the Way of Bayes? Note that 45-55% is just a range I made up on the spot. I don’t actually have such a range defined; it just matches my behavior when translating me into Bayes.
No, that makes sense to me. You have essentially no information about whether a statement is more likely to be true or false at that percentage range.
Sort-of agree. The Bayesian formulation of a similar strategy is: Don’t bother remembering an answer to a question when that answer is the same as what you would derive from the ignorance prior. i.e. discard evidence whose likelihood ratio is near 1. However, the prior isn’t always 50%.
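A sketch of the odds-form bookkeeping behind these replies (Python; the 1.05 cutoff is an arbitrary assumption, not anything from the thread):

```python
def update(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

def worth_caching(likelihood_ratio, threshold=1.05):
    # Evidence whose likelihood ratio is near 1 barely moves any posterior,
    # so per the strategy above it can be discarded rather than remembered.
    return not (1 / threshold < likelihood_ratio < threshold)

print(update(0.5, 4.0))     # 0.8: strong evidence moves a 50% prior a lot
print(worth_caching(1.02))  # False: close enough to 1 to throw away
```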
Cool. I guess I never thought about what the distinction between active and passive disbelief would be for a Bayesian. It makes perfect sense now that I think about it… and it would have certainly made a whole bunch of discussions in my past a lot easier.
Pssh. Always learning something new, I guess.
Entire ways of acting and reacting—even mini-facets of personalities, tones of voice, turns of phrase—are also cached. These don’t have to be from someone else—they can be from the “you” of 10 years ago (which may have been a composite of your role models at the time). They are otherwise known as habits that you haven’t updated or re-evaluated for a long time.
The mini-pattern of action worked (or made sense) when you were 7, and it’s so second-nature that it hasn’t even entered your conscious awareness since then to give you a chance to reassess it.
“Well, maybe the human species doesn’t deserve to survive.”
does not imply or approximate
“I want to end the human race person by person.”
and your implication that it does is incredibly stupid.
Please, explain how the human race could fail to survive without each of its members dying.
Humanity can survive without deserving to, and someone may prefer that state of affairs even given that judgement. Also, someone can believe that it doesn’t deserve to but not care to be the instrument of justice in that case.
I consider those relatively low-probability interpretations when someone’s talking about humanity deserving not to survive, though.
I never said that, nor implied it. You’re completely misinterpreting what I said.
Consider the difference between these two scenarios:
a) There’s a family of 10 people, whom I have normatively decided do not deserve to live. Over the course of the next 40 years, I kill them one by one, using an instant and physically painless method, one every 4 years.
b) There’s a family of 10 people, whom I have normatively decided do not deserve to live. I wait 40 years, and then kill them all at once, using an instant and physically painless method.
Answer me this: are they the same thing?
The same end result, yes, but not the same process, and the amount of suffering in process a) is far greater, would you agree?
Actually, assuming that the people in the family are relatively normal and want to live and want each other to live, and assuming that they don’t know about your plans before you start enacting them, I’d expect the suffering to be significantly higher in situation A, since the family members experience more time mourning and probably considerable time worrying about being murdered.
I’m not actually sure how these scenarios are relevant, though.
Exactly my point. [mixed up a) and b) in the last question].
A bad thing about a person’s death is the negative externality imposed on those who mourn them.
So to equate someone not wanting to kill their child [the equivalent of scenario a), killing a person with people around to mourn them] with someone deciding that the human race, as a whole, deserves to die [which is the equivalent of scenario b)], or to say that this person is a hypocrite, is totally idiotic.
If in the original essay it said it would be hypocritical of someone to say that the human race deserves to die while being unwilling to push the button which instantly ended all human life, then it would make sense.
Why the downvotes on the original reply? Are people so thin-skinned that they can’t take their arguments being called stupid, or are they so ignorant that they bury an argument they don’t agree with?
No, glutamate. Your original comment was rude and uninteresting. “Stupid” isn’t an informative criticism (not even if you specify that the stupidity is “incredible”), and it signals contempt and disrespect besides. Uninformative criticisms that signal that attitude are not readily welcomed here.
You could have said—if I interpret your view correctly, which I may or may not—something like:
That, and it’s pretty standard around here to assume that the human species dying off is bad even if it happens in such a way that nobody knows it’s happening or happened—it’s not actually about suffering, in other words.
The vocabulary someone uses in an attack on an argument shouldn’t be limited by the degree to which the language might offend someone. Or should it?
To be explicit: I am not calling him stupid! Only someone intelligent could write an article like this, that’s obvious, and I agree with the rest of it.
And yes, that’s a superior phrasing of my argument. I should have been more descriptive in the original post, that’s my fault. Do you agree with it?
This is an ongoing controversy, but if you can be inoffensive without sacrificing too many other virtues, it seems best to go for it.
That’s good to know. It wasn’t at all clear (any of it!) from your original comment.
I would agree with a weak, purely descriptive form of my restatement.
If I’m a member of the family, I prefer (a), because it gives us nine opportunities to identify you, track you down, and kill you before you kill us all.
Who needs a meme or a cached thought to ask? Can you ask one question at a time, or make a single statement in fewer words?
Interesting article, but I’m not so sure about the “cache” analogy. A typical cache in computer science differs in two major ways from the effect you’re pointing to:
1. A cache stores the result of a computation: the result of a complex algorithm, of a database or external-server query, of a disk read, and so on. The computation is done once, and the result is stored for later use. Very few caches in computer science hold results that came from elsewhere and were never computed locally at least once. In your case, though, it’s not “I once did the complex job of thinking about love and rationality, concluded that love is not rational, and cached that computation, so later on I reuse it,” but “I heard that love is not rational; I didn’t do the computation, but I stored the result anyway.”
2. As a consequence of 1., a cached result in computer science is (almost) never wrong. It may be stale (an old version of a web page), but not wrong (that old version was correct when you fetched it). In the cases described by the article, the “cached thoughts” are wrong values stored in the cache, not merely stale ones.
What you refer to sounds more like a cache-poisoning attack than the normal operation of a caching system.
I don’t know how to rephrase the “cached thoughts” expression into something more accurate but still as potent, so I’ll stick with “cached thoughts” for now, though I’m uncomfortable with it because of those two differences.
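A minimal sketch of that distinction, in Python (all names here are my own invention, not from the comment above). A normal cache stores a value the program computed at least once; a “cached thought” is a value written into the cache from outside, with the computation never having run:

    def think_hard_about(topic: str) -> bool:
        # Stand-in for the expensive computation (placeholder logic).
        return len(topic) % 2 == 0  # arbitrary; the point is that it actually runs

    cache: dict[str, bool] = {}

    def cached_judgement(topic: str) -> bool:
        if topic not in cache:  # cache miss: do the work once
            cache[topic] = think_hard_about(topic)
        return cache[topic]  # cache hit: reuse the verified result

    # A "cached thought": a value injected without the computation ever
    # running -- closer to cache poisoning than to normal cache operation.
    cache["is love rational?"] = False  # heard it somewhere; never derived

    print(cached_judgement("is love rational?"))  # returns the injected value: False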
Indeed.
If we decouple the cost of caching into “was true but is now false” and “was never true,” it may be that one dominates the other in likelihood. So maybe the most efficient solution to the “cached thought” problem is not rethinking things, but ignoring most things by default. This, however, carries the opportunity cost of false negatives.
I’ve personally found that I am very dependent on cached thoughts when learning or doing something new (not necessarily bad), favoring breadth over depth. What I do is try to force each cached thought to have a contradictory, or at least very different, twin.
E.g., though I have never coded in it, if I hear “C++,” I’ll (try to) think both “not worth it, too unsafe and error-prone” and “so worth it, speed and libraries.” Whenever I don’t have enough data for a strong opinion, I’m okay with caching thoughts, as long as I know they are cached and I try to cache “contradictory twins” together.
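As a toy illustration of that “contradictory twin” habit (my own sketch, with invented names), each cached topic maps to a pair of deliberately opposed views, so recalling one automatically recalls the other:

    # Map each topic to a pair of opposing cached views.
    cached_twins: dict[str, tuple[str, str]] = {
        "C++": ("not worth it: too unsafe and error-prone",
                "so worth it: speed and libraries"),
    }

    def recall(topic: str) -> None:
        view, twin = cached_twins[topic]
        print(f"[cached] {topic}: {view} / but also: {twin}")

    recall("C++")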
I believe Schopenhauer came to the same conclusion.
“Reading is merely a surrogate for thinking for yourself; it means letting someone else direct your thoughts. Many books, moreover, serve merely to show how many ways there are of being wrong, and how far astray you yourself would go if you followed their guidance. You should read only when your own thoughts dry up, which will of course happen frequently enough even to the best heads; but to banish your own thoughts so as to take up a book is a sin against the holy ghost; it is like deserting untrammeled nature to look at a herbarium or engravings of landscapes.”
However, I don’t think I would go so far as to say that you should think until you run out of thoughts. Finding the right balance seems to be an art we are continually improving on—adapting to each situation.
Your exhortation to think reminds me of an experience I had while drifting in the summer of 1987, roughly describable as follows: ignore the obvious intended meaning of the written prose and the various signs that society frames and promotes, and look instead for the best alternative sense you can find in their shortest form, different from the obvious one, on a case-by-case basis, while assuming a source that has access to your inner life, like a friend suffering from aphasia who can’t express himself clearly but may have acutely intelligent ideas that relate to you. Or, if you prefer, while imagining yourself in a Matrix-like simulated reality, with remote hackers trying to pass you useful messages through indirect, constrained means. A limiting feature of that experience was that it led, through bitter surprises, to the observation that society frames and promotes a great deal of news involving people’s deaths, which you would normally ignore in blissful peace.
“One neuropsychologist estimates that visual perception is 90 percent memory, less than 10 percent sensory [nerve signals].” Apparently, we even use cached thought to see. We’re really biased, huh?
src?
An example of a cached thought reported by Alex Blumberg in This American Life, episode 293: “A Little Bit of Knowledge.”
Many more examples in this episode.
From experience, I find that the appeal-to-nature fallacy dominates cached thoughts, manifesting itself mainly as conservatism; for example, when I broached the topic of life extension with my mother.
From A New Kind of Science by Stephen Wolfram, page 621:
Does that mean there are other usable tricks?
I was curious so I looked up the reasoning (and original paper) behind the hundred-step rule.
“Connectionist Models and Their Properties” (http://csjarchive.cogsci.rpi.edu/1982v06/i03/p0205p0254/MAIN.PDF)
“Neurons whose basic computational speed is a few milliseconds must be made to account for complex behaviors which are carried out in a few hundred milliseconds (Posner, 1978). This means that entire complex behaviors are carried out in less than a hundred time steps.”
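A back-of-the-envelope version of that arithmetic (illustrative numbers only, not taken from the paper):

    # If a complex behavior completes in a few hundred milliseconds and one
    # neural "step" takes a few milliseconds, the sequential depth is bounded:
    behavior_ms = 300  # e.g., recognizing and responding to a face
    step_ms = 3        # roughly one spike-to-spike interval
    print(behavior_ms // step_ms)  # -> 100 sequential steps, at most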
How do you handle interpretation of the cache? For example,
“Death gives rise to meaning” can be interpreted in many different ways: some see it as inspiring, while others find it meaningless, confusing, or untrue.
Eliezer seems to me to be against the “death gives life meaning” cache, as far as I can tell, since he seems to support cryonics, transhumanism, etc.