Cached Thoughts
One of the single greatest puzzles about the human brain is how the damn thing works at all when most neurons fire 10–20 times per second, or 200Hz tops. In neurology, the “hundred-step rule” is that any postulated operation has to complete in at most 100 sequential steps—you can be as parallel as you like, but you can’t postulate more than 100 (preferably fewer) neural spikes one after the other.
Can you imagine having to program using 100Hz CPUs, no matter how many of them you had? You’d also need a hundred billion processors just to get anything done in realtime.
If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. And it’s a very neural idiom—recognition, association, completing the pattern.
It’s a good guess that the actual majority of human cognition consists of cache lookups.
This thought does tend to go through my mind at certain times.
There was a wonderfully illustrative story which I thought I had bookmarked, but couldn’t re-find: it was the story of a man whose know-it-all neighbor had once claimed in passing that the best way to remove a chimney from your house was to knock out the fireplace, wait for the bricks to drop down one level, knock out those bricks, and repeat until the chimney was gone. Years later, when the man wanted to remove his own chimney, this cached thought was lurking, waiting to pounce . . .
As the man noted afterward—you can guess it didn’t go well—his neighbor was not particularly knowledgeable in these matters, not a trusted source. If he’d questioned the idea, he probably would have realized it was a poor one. Some cache hits we’d be better off recomputing. But the brain completes the pattern automatically—and if you don’t consciously realize the pattern needs correction, you’ll be left with a completed pattern.
I suspect that if the thought had occurred to the man himself—if he’d personally had this bright idea for how to remove a chimney—he would have examined the idea more critically. But if someone else has already thought an idea through, you can save on computing power by caching their conclusion—right?
In modern civilization particularly, no one can think fast enough to think their own thoughts. If I’d been abandoned in the woods as an infant, raised by wolves or silent robots, I would scarcely be recognizable as human. No one can think fast enough to recapitulate the wisdom of a hunter-gatherer tribe in one lifetime, starting from scratch. As for the wisdom of a literate civilization, forget it.
But the flip side of this is that I continually see people who aspire to critical thinking, repeating back cached thoughts which were not invented by critical thinkers.
A good example is the skeptic who concedes, “Well, you can’t prove or disprove a religion by factual evidence.” As I have pointed out elsewhere,1 this is simply false as probability theory. And it is also simply false relative to the real psychology of religion—a few centuries ago, saying this would have gotten you burned at the stake. A mother whose daughter has cancer prays, “God, please heal my daughter,” not, “Dear God, I know that religions are not allowed to have any falsifiable consequences, which means that you can’t possibly heal my daughter, so . . . well, basically, I’m praying to make myself feel better, instead of doing something that could actually help my daughter.”
But people read “You can’t prove or disprove a religion by factual evidence,” and then, the next time they see a piece of evidence disproving a religion, their brain completes the pattern. Even some atheists repeat this absurdity without hesitation. If they’d thought of the idea themselves, rather than hearing it from someone else, they would have been more skeptical.
Death. Complete the pattern: “Death gives meaning to life.”
It’s frustrating, talking to good and decent folk—people who would never in a thousand years spontaneously think of wiping out the human species—raising the topic of existential risk, and hearing them say, “Well, maybe the human species doesn’t deserve to survive.” They would never in a thousand years shoot their own child, who is a part of the human species, but the brain completes the pattern.
What patterns are being completed, inside your mind, that you never chose to be there?
Rationality. Complete the pattern: “Love isn’t rational.”
If this idea had suddenly occurred to you personally, as an entirely new thought, how would you examine it critically? I know what I would say, but what would you? It can be hard to see with fresh eyes. Try to keep your mind from completing the pattern in the standard, unsurprising, already-known way. It may be that there is no better answer than the standard one, but you can’t think about the answer until you can stop your brain from filling in the answer automatically.
Now that you’ve read this, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!
1See “Religion’s Claim to be Non-Disprovable,” in Map and Territory.
But as you said, we can’t actually recompute everything. No time. So the exhortation to “think!” can’t possibly be followed in more than a small fraction of the cases.
The best we can do is to occasionally recompute certain items. And, if the re-computation is significantly at odds with the cached result, communicate this to others, who are likely to have the same cached result. We can do this in parallel. You can recompute a few things, I’ll recompute a few things, and thousands of others are meanwhile recomputing a few things. Occasionally someone may have a significantly different result, which he’ll hopefully communicate to others. The number of significantly different results will hopefully be only a small fraction of the number of recomputed results, which might bring the sharing of different results within the realm of possibility. For example, if there are 100 of us, and each recomputes 10 results, then collectively we recompute 1000 results (assuming no overlap). Only 1 out of every 100 recomputed results might be different from the cached result. So we only need to share among ourselves the 10 significantly different recomputed results. That is pretty easy, and we will in effect have done an overhaul of 1000 cached results, at the price of only 10 recomputations each and 10 received communications each (and one transmitted communication for each person who computed a new result). Seems doable.
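The arithmetic in that scheme can be made explicit (a sketch using the numbers above; the variable names are just for illustration):

```python
# Distributed cache-auditing, per the comment above.
people = 100
recomputed_each = 10
total_recomputed = people * recomputed_each   # 1000 results, assuming no overlap
divergent = total_recomputed // 100           # ~1 in 100 differs from the cache
messages_received_each = divergent            # each person hears about the 10 differences
# Cost per person: 10 recomputations plus 10 received communications,
# for a collective overhaul of 1000 cached results.
```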
What seems to be blatantly forgotten is that people believe themselves too busy for “meditation” (as in sitting down and thinking, not necessarily in a religious way), which is coincidentally exactly the process for “clearing the cache.” Because we run around all day working and consuming entertainment instead of sitting on a hill watching sheep eat grass, meditation has simply lost its allure. It’s a sad statement, really, because with only 10 minutes of meditation you can free up 1–10 cached thoughts, which, practiced over the course of a year, would mean up to 3,650 cached thoughts revisited. Not as optimal as the above solution, but humans wouldn’t submit to that sort of computer-like efficiency anyway.
Two problems.
First, each of us has a different mind that produces a different thought cache, and most of us probably won’t be able to find much of a trunk build that we can agree on. To avoid conflicts, we’ll have to transition from the current monolithic architecture to a Unix-like modular architecture. But that will take years, because we’ll have to figure out who’s running what modules, and which modules each entry in the thought cache comes from. (You can’t count on lsmod to give complete or accurate results. I’d been running several unnamed modules for years before I found out they were a reimplementation of something called Singularitarianism.)
Second, how much data will we have to transfer (allowing for authentication, error correction, and Byzantine fault-tolerance), and are you sure anyone has enough input and output bandwidth?
I think you’re wrong as a question of fact, but I love the way you’ve expressed yourself.
It’s more like a non-monotonic DVCS; we may all have divergent head states, but almost every commit you have is replicated in millions of other people’s thought caches.
Also, I don’t think the system needs to be Byzantine fault tolerant; indeed we may do well to leave out authentication and error correction in exchange for a higher raw data rate, relying on Release Early Release Often to quash bugs as soon as they arise.
(Rationality as software development; it’s an interesting model, but perhaps we shouldn’t stretch the analogy too far)
overhawl → overhaul
Two words: Stockholm Syndrome
Re-reading this, it isn’t clear what you’re responding to. For future readers, you’re explaining why “death gives meaning to life” is a cached thought.
Indeed, I was wondering about that. For more clarity: it’s a reply to bw’s collapsed comment. It’s not nested since this article was moved from overcomingbias to here, and overcomingbias didn’t have nested comments. You’ll see that a lot in the sequences.
Right.
But the problem was to keep going on, breathing and even sort of thinking in the presence of death in this world.
Thousands of generations of our ancestors had to adapt to death in some way, without any chance to strike back at it at all.
It isn’t your usual “hostage situation” as they go...
bw: Could you please at least provide a citation or reference for us ignorant fools who don’t understand how death gives meaning to life?
I’ll have to agree with the diagnosis of Stockholm Syndrome.
“Death gives rise to meaning” can have many interpretations. One of the most common is that death makes life finite. Each of us has only so many moments, so we should try to get the most out of them by living a meaningful life.
Conversely, if you had infinite time, it would not matter much what you did, because you could do everything, which would mean that nothing really matters all that much.
Yeah, and if you haven’t spent months or, better, years studying astrology you’re in no position to discuss that either, especially dismissively. And if you haven’t personally been abducted by aliens you shouldn’t… [etc] /sarcasm
If one believes to the best of their limited rationality that they have been abducted by aliens then the thing to do is not to jump all over them but to try and discover if, to the best of your rationality, what they say is true. If after examination it isn’t true then you can do what you will, but if to the best of your knowledge what they say holds up then it would be one of the greatest discoveries in recent history. You would certainly need to get more people to also check the claim as there are so many (presumed) bogus claims around.
If we had infinite resources, then yes, you would be right. But our resources are limited and so in order to investigate anything at all, we must also decide to not investigate certain claims. So unfortunately we must dismiss many claims out of hand, without making the slightest effort to investigate them, not because we are dogmatic but because the finiteness of resources forces us to choose. On the bright side, there are many people on the planet, and so you can probably find at least a few who will lend you their finite resources. If it turns out that there is good evidence that a person really was abducted by aliens, then the few who had initially been willing to entertain and investigate the claim will probably, evidence in hand, be able to find a slightly larger audience, which in turn will find a still larger audience, and so on until it comes to wide notice.
Surely if only the greatest thinkers thought it, everyone else who holds it to be true has it cached?
Statistical proofs of things don’t necessarily work if the world is controlled by an intelligent entity.
Yes they do. If the world is controlled by an intelligent entity, then statistical proofs tell you about the behaviour of that entity, rather than impersonal laws of physics, but they still tell you what’s likely to happen.
Yes; however, it is conceivable that the intelligent entity is sufficiently complicated that no amount of evidence gathered within the universe could allow us to uniquely identify its nature. This is of course implausible based on prior probability, Solomonoff induction, etc.
“Can you imagine having to program using 100Hz CPUs, no matter how many of them you had?”
No, it would be very difficult. But one thing I’m wondering is: what’s the instruction set of the neuron? I’m probably taking the analogy too far. Is it more advanced than add/sub/mult/div?
Yes.
The question is not whether it is a cached thought but whether it is a good thought. And what I claim is that it is both good and extremely difficult to understand, precisely because of our natural bias to avoid death. As for references, I suppose it is a central thought in continental philosophy since Hegel: Heidegger and Jonas, but you can find it elsewhere, even as far from existentialism as Mayr or Maturana.
It would be far more instructive (imo) to describe how you conceive of death in this way, rather than merely stating that some Very Smart People have.
Note that this person has not posted on this website since October 12th, 2007, and if I am not mistaken these posts may in fact be imported from another website or earlier version of this website.
So how much of the brain’s advanced nature comes from slower processors with better instruction sets, and how much comes from network effects (both spatial and temporal)?
FWIW, as far as caching goes, I’ve noticed cache failures many, many times in my life, mostly when I’m doing something that’s 99% routine but for some reason should be changing that last 1%, and forget to. For example, if I’m supposed to run an errand on the way home, it’s not uncommon for me to forget the errand. I leave work, think “the goal is home,” and pull the set route from my brain. In fact, driving home is so rote that I often don’t remember all the details of the drive. It’s not uncommon to hit home and then realize I needed to go to the grocery store that was on the way. I often think of “brain farts” like that either as a problem of my really broad decision tree (I often maximize a choice for the local branch I’m in rather than across the whole tree: hmm, I smell gas; hmm, I need light; I know, I’ll light a match!), or as collisions in my memory hash table, like the aforementioned picking of my normal driving routine rather than deviating when I was supposed to.
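That failure mode is easy to sketch in code: a cached plan fires on its trigger before today’s extra goal is ever consulted (all names here are hypothetical; this mirrors the failure, not the brain):

```python
# The "autopilot drive home" cache failure: the stored route is
# returned on a cache hit, so the errand list is never even read.
route_cache = {"leave work": "take the usual route home"}

def plan(trigger, errands=()):
    if trigger in route_cache:
        return route_cache[trigger]   # stale hit: errands ignored
    return "plan route via " + ", ".join(errands)

stale = plan("leave work", errands=("grocery store",))  # returns the cached route
```

The bug isn’t in the cache mechanism; it’s that nothing checks whether the cached entry is still valid for today’s goals.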
“Death gives meaning to life” reminds me of this:
http://tvtropes.org/pmwiki/pmwiki.php/Main.WhoWantsToLiveForever
1.”It’s a good guess that the actual majority of human cognition consists of cache lookups.
This thought does tend to go through my mind at certain times.”
A funny joke...as if the idea expressed in the first sentence may itself have been cached.
2.”Raised by silent robots” is a catchy phrase. Did you make it up?
“As I have pointed out elsewhere, this is simply false as probability theory.”
I find this phrasing misleading. “False as X” can mean the same thing as “as false as X.”
Another example of a false idea that seems to apply to the cache idea is that there is a material, mechanical explanation for everything. By the evidence from physics we can state with assuredness that there is no material, mechanical explanation for all phenomena, yet few seem able to accept the results of the most proven scientific theory in the history of mankind. When it comes to religion, I’m not sure that any amount of evidence will convince a true believer.
I think this assumed dichotomy of material/mechanical vs non- is itself a cached thought. I do assume everything can be explained; but whatever mechanism of explanation I use can, if you feel like it, be called “material” or “mechanical”… therefore what you really mean to say is ‘another example of a false idea is that everything can be explained’.
“Operations of thought are like cavalry charges in battle – they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.” —Whitehead
“By the evidence from physics we can state with assuredness that there is no material, mechanical explanation for all phenomena, yet few seem to be able to accept the results of the most proven scientific theory in the history of mankind.”
Sorry, I don’t know what you’re talking about.
Nick, the explanation of cached thoughts assumes the mind and the brain are the same thing. I’m suggesting that thought comes from an earlier misunderstanding of what science says about the nature of reality. I read someone unhesitatingly repeating a meme, and thought, “Cached thoughts.” Sorry for not being clearer.
Is caching the best mental model of how these jillions of “100Hz processors” operate?
An alternate: lossy decompression. Rather like, for instance, how DNA information is expressed during an individual’s life. (And, one cannot help but suspect, at a much larger scale than that of the lives of individuals.)
A reason to prefer “lossy compression” over “caching”: “Caching” leads one to believe that the information is cached without loss. And, one tends to look around to find where the uncompressed bits can be stored.
But, I’ll admit I’ve failed to put together the pieces of a general intelligence machine using a lossy compression model. So maybe it’s a bogus model, too.
Has anyone built the equivalent of a Turing machine using processor count and/or replicated input data as the cheap resource rather than time?
That is, what could a machine that does everything in one step do in the way of useful work? With or without restrictions on how many replications of the input data there are going in and where the output might come out?
OK, OK. “Dude, what are you smoking?”, right? :)
Felix: Yes, for example see http://en.wikipedia.org/wiki/NC_%28complexity%29
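The tradeoff Felix asks about, spending processor count to save sequential steps, can be made concrete with a minimal sketch (my own illustration, simulating the parallelism in plain Python, not code from the linked article): a tree reduction sums n numbers in O(log n) rounds instead of n − 1, if every round has enough processors to handle all the pairs at once.

```python
def parallel_sum(values):
    """Simulate a parallel tree reduction: with enough processors,
    n numbers can be summed in O(log n) sequential rounds
    rather than the n - 1 steps a single processor would need."""
    rounds = 0
    while len(values) > 1:
        # In one round, every adjacent pair is summed "simultaneously".
        pairs = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:          # odd element carries over unchanged
            pairs.append(values[-1])
        values = pairs
        rounds += 1
    return values[0], rounds

total, depth = parallel_sum(list(range(16)))
# 16 numbers summed in only 4 rounds of parallel work
```

The depth grows as log2(n), which is why problems solvable this way land in the NC complexity class mentioned above.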
“By the evidence from physics we can state with assuredness that there is no material, mechanical explanation for all phenomena, yet few seem to be able to accept the results of the most proven scientific theory in the history of mankind.” When did this physics breakthrough happen? Or are you referring to this?
tggp: The breakthrough I’m referring to started in 1900 with Max Planck. It ended around 1930 with what is now called quantum physics. If you go to the Nobel Prize website and read Max Born’s acceptance speech you’ll get a good flavor of this. Also, Henry Stapp has written numerous papers along these lines.
You seem to shy away from the obvious conclusion of your otherwise excellent post. Our slow brains are entirely unable to do much reasoning from first principles, therefore we ought to pay strict attention to such received ideas as have stood the test of time and been culturally cached. “Love isn’t rational” strikes me as an excellent example. If our rationality is bounded, as it certainly is, then it is often rational not to try to think things out from first principles, but to accept the evolved memes of the surrounding culture. You may have undermined the whole purpose of this website. Perhaps our biases are important, perhaps they are reliable guides, and overcoming them leaves us with nothing to go on but our slow and fallible reason.
Note that this is not a mere academic argument. The political left has often been prone to the idea that they could throw off the shackles of social convention and replace them with something more rational. This has seldom worked out well. Politically, you’ve made an excellent argument for the very arational Burkean conservative point of view.
I don’t think most of us would agree that everyone out there is playing human rational capacity to the hilt and needs to slow down on attacking its biases and prejudices. After all, the modern critical examination of human biases, while touched upon throughout history, is essentially a century old or less.
Where do you think ancient wisdom comes from, mtraven? From still more ancient wisdom? I’ve tried to rethink a few things myself, and though I’ve gone astray from time to time, I wouldn’t have it any other way. Not for anything in the world. Sometimes you need a stronger weapon than your ancestors have forged, you see. Tsuyoku naritai!
Are “cached thoughts” and “habits” similar?
Eliezer—there is one additional input to surviving ancient wisdom that goes beyond the thought that the ancients put into it, and that is the simple fact of its survival. Even if people came up with an idea for bad reasons, that idea may nevertheless be a good one and may survive on that account. If it survives, then it may be a good idea even though nobody knows why, and even though nobody ever knew why.
I make no recommendation on this basis, I simply point out that there can be more to ancient wisdom than what ancient minds put into it, and an attempt to re-compute, even if it re-captures the original computation, does not necessarily recap the process of selection. (Obviously we would want to distinguish between parasitic and symbiotic memes—the survival of an idea may be, but is not necessarily, a result of its benefit to us.)
Douglas, this is difficult because you appear to prefer to allude to your position rather than state it.
Quantum mechanics, at least according to some ways of interpreting it, does indeed say that some events don’t have any explanation beyond “that’s the way it happened to go”. So far, so good; but what does that have to do with whether the mind and the brain are the same thing? (Actually, I think physicalists would generally say not “the mind and the brain are the same thing” but something more like “the mind is something the brain does” or “the mind is a set of patterns in what the brain does”.)
I suppose the “Copenhagen interpretation” of QM makes conscious observation responsible for wavefunction collapse; but, speaking of “earlier misunderstandings of what science says about the nature of reality”, it might be worth mentioning that AFAIK just about all physicists these days prefer other ways of looking at QM that don’t have that feature.
But I may very well be misunderstanding you or missing the point in some other way. Let’s be more specific. You say that Eliezer’s post reveals that he’s working with bad cached ideas, and that they’re shown to be bad by quantum mechanics. So could you please give a specific example of something Eliezer said that is incompatible with quantum mechanics?
g- I’m saying that the need to explain your thinking by means of brain processes assumes something about the situation that may not be true. I’m not saying that such a research project is doomed to failure, or that it violates the laws of physics, just that it is not the only explanation that would agree with what has been discovered in physics. I would further say that when the physicists overcame the idea that there must be a material, mechanical explanation for all the phenomena they were studying, we got the most validated scientific theory in history. Sometimes when I see all the difficulties that occur in both neuroscience and philosophy around this issue, I think that another approach might be more appropriate. Otherwise I think the post by Eliezer makes some good points—which is why I tried applying it to the post itself.
Quantum mechanics did not result from overcoming the idea that there must be a material, mechanical explanation for all the phenomena physicists study.
What about quantum mechanics gives us any reason to think that there’s anything wrong with Eliezer’s commitment to understanding minds in terms of brains?
And could you give a specific example of a difficulty in neuroscience or philosophy that results from a commitment to understanding minds in terms of brains? (I find it easier to think of ones that come from a commitment to not understanding minds in terms of brains.)
g- quantum physics came about because of the recognition that classical physics is wrong. The problem that Max Planck solved by introducing the quantum was “How could any object in this universe exist without that object emitting so much energy that everything would be instantly vaporized?” Not a small problem. The recognition that there is no material, mechanical explanation of all phenomena was important to the development of the new science and the new scientific view of the universe. The revolution was completed (in terms of the experimental evidence) with the Aspect experiments and Bell’s theorem. The revolution in terms of cached thought is continuing. There is nothing wrong with Eliezer’s commitment, a commitment I respect. The idea that science demands material, mechanical explanations is the cached thought that I was pointing out as not true. I believe that would be a valid example of what the post is about. Specific difficulties in neuroscience and philosophy include: the one mentioned by Eliezer in his post; the binding problem (there is no place in your brain where what you experience comes together the way you experience it); the self (there is no “I” in the brain); the experience of conscious will as being causally efficacious; the observed instances of fully functional human beings who have little or no brain; memory (no memory banks in the brain)...
Yes, QM came about because of the recognition that classical physics is wrong. (I would take issue with some details of your one-sentence summary, but it doesn’t matter.) But then you leap from there to “the recognition that there is no material, mechanical explanation of all phenomena”, which is something entirely different.
Bell’s inequality and Aspect’s experiments demonstrating its violation don’t say that there is no material, mechanical explanation of all phenomena. They place limits on what sorts of material, mechanical explanation there might be.
I have no idea how you can (1) say that there is nothing wrong with Eliezer’s commitment to understanding minds in terms of brains, having (0) described his application here of that commitment as “someone unhesitatingly repeating a meme” and, unless I misunderstood your opening comment, characterized his position as that of an unconvinceable “true believer”.
Could you please explain either what grounds you have for thinking that Eliezer thinks that “science demands material, mechanical explanations” or else what grounds you have for thinking that QM shows this to be wrong? (I’m fairly sure that one of those is silly, but which one depends on what you mean by “material, mechanical explanations”.)
It seems to me that abandoning the search for material, mechanical explanations makes the problems you list less problematic only if what you actually do is to abandon the search for explanations. I’ve never seen the least hint of a non-material-mechanical explanation for any of them that actually explains anything.
That story was published in Fine Homebuilding a few years ago.
I happen to know because... I was the idiot who tore down a chimney from the bottom on the advice of my neighbor, and after telling the story to some acquaintances at a timber framing class up in Vermont and being told “you should write that up and send it to Fine Homebuilding”, I did.
I’ve got a copy of the story hanging off my blog.
Anyway, love the blog. Keep up the good work.
This isn’t some theoretical limit of the human brain; it’s just what they’ve found from testing (or something they just made up, now that I think about it). Whoever they were testing was alive, and was taking full advantage of their soul.
g- When I use the word material I mean composed of matter. When I say matter I mean something that has mass and exists as a solid, liquid, or gas. When I say mechanical I mean explainable by causally determined material forces. As Feynman pointed out, “No one has found any machinery behind the law,” referring to modern physics. I’m sorry, the true believer comment was in response to the comment made by Eliezer about religion. I don’t know that it applies to Eliezer himself—I don’t know him at all. The grounds I have for saying that science does not demand “material explanations” is the fact that the most experimentally tested, validated scientific theory we have does not posit them. I didn’t mean to attack Eliezer, but just the idea that there must be that type of explanation in order to be scientific. I think I was applying the post, along with the earlier comments, and I would do so out of respect for the basic material covered. I would agree with you wholeheartedly that abandoning the search for explanations would be a huge mistake. I’m suggesting that by removing the unneeded assumption that all explanations are of a certain form, science can advance in new ways. As David Bohm once said, “Progress in science is usually made by dropping assumptions.”
I really enjoyed your post! I would say we cache things we’ve reasoned out ourselves as well. Say you do a mathematical proof of the Pythagorean theorem. At the end of the proof, you might feel you really understand the theorem, but the next year, or even the next day, you have completely forgotten the steps you used to do the proof. You might, with great concentration, be able to extrapolate them again, but you still believe the theorem without recalculating it from scratch. You remember being convinced in the past, and you trust your past self’s judgment. I think this is why it is so impossible to change many people’s minds on highly politicized matters. They remember having been truly convinced by such-and-such an argument for the correctness of one position, without remembering what exactly the argument was. The feeling of being convinced is what is so hard to forget. Since most people know they are unable to argue well themselves, they trust that their inability to counter your points is their failing as a debater, and that if whoever convinced them of X were here, he would know what to say, because his arguments were so convincing. I wonder to what extent we depend upon conclusions we came to long ago and trust as our own today. I’ve personally found that I remember my conclusions a whole lot better than my reasoning or the evidence, and need effort to remind myself. But could we function if we didn’t use these conclusions? How much should we trust our past selves?
I don’t think I know of anyone who believes that everything is explicable in terms of causally determined things that have mass and exist as solids, liquids or gases, still less that everything must be. And I can’t imagine how anything in Eliezer’s original post suggests that he’s insisting on any such limitation.
Neither can I see how this has anything to do with QM (except, I guess, that some versions of QM give us a universe with randomness in it as well as determinism), or with Feynman’s comment about machinery. (The fundamental laws known at any time are by definition laws that no one has found any machinery behind. This was just as true of Newton’s laws in 1700 as of QM in 2000.)
Thinking in text...
Change your mode of cache usage. The brain has two conflicting tendencies here, which I’ll name “contagion” and “cull”. The contagion tendency is the way that related mental objects prime each other. The cull tendency is the way that a firm decision suppresses valid alternates. Your motto should be “first contagion, then never quite cull”. If you cull first, that’s “jumping to conclusions”. If you contagion but don’t cull, that’s called “woolgathering” and “being a ditherer”. But if you can hold down a partial cull, you’ll have alternates primed, and you can mentally turn on a dime. So by focusing first on contagion, you can push alternates above a threshold where they can’t be culled.
Also, intentionally pump in alternates by using De Bono’s “po”.
Also, set a mental tripwire on the feeling of “preaching”, stop and contagion. Reciting cliches always has that feeling that it ought to be followed by the refrain “amen”.
g- The cached thought I’m recognizing as false is that science demands material explanations. When I hear the mind described as the brain, that thought is activated in my thinking. Material, mechanistic = scientific. I don’t know what is in your mind or Eliezer’s. I’m trying to deactivate the thought in my own mind. Isn’t that the point of the post?
Douglas, you appear to have shifted your ground: originally you said “the explanation of cached thoughts assumes the mind and the brain are the same thing” and “I read someone unhesitatingly repeating a meme and thought”, but now you say it’s only your own cached thoughts that you’re concerned about.
I still have no idea why you think that QM makes any difference to how much science “demands material explanations”; with the definition of “material” that you gave it never did, and with any definition of “material” broad enough to impact the idea that “the mind is the brain” QM doesn’t make science any less thoroughly concerned with “material” things.
By all means flush the “material explanations only” thought out of your mental cache. But if you’re replacing it with some idea that QM has done away with science’s commitment to material explanations, then I bet that is a cached thought too, and I think it’s a wrong one.
In 1998, I wrote a rec.arts.int-fiction post called “Believable stupidity” (http://groups.google.com/group/rec.arts.int-fiction/browse_thread/thread/60a077934f89a291/3fffb9048965857d?lnk=gst&q=believable+stupidity#3fffb9048965857d) saying that Eliza, a computer program that matches patterns and fills in a template to produce a response, always wins the Loebner competition because template matching is more like what people do than reasoning is.
Herb Simon’s cognitive psych lectures at Carnegie Mellon always started with this same observation of how slow neurons are. He emphasized how bad we are at reasoning logically and how good we are at associative tasks. His and Allen Newell’s work on AI in the early 1960s led to the SOAR project, which models thinking as a big production system that caches effective sequences of inferential steps for later re-use. Simon also used to say that it took about 10 years to accumulate a large enough cache to be considered an expert in something.
Re-reading this, “10 years” resulted in a cache hit in my mind for “10,000 hours”.
Sounds consistent with Jeff Hawkins’s memory prediction framework
So is everything else. That’s the problem with it.
Strangely, I have a cached thought of, “That’s bullshit.” This pings almost everything I hear said by people in a particular verbal/non-verbal pattern. For some reason, when someone says something in a manner that matches this verbal/non-verbal pattern I think, “That’s bullshit.” It doesn’t even matter what they are saying. It fires and afterwards I think about it and wonder if it really is bogus.
If someone tells me that love isn’t rational it is very likely that their communication style is going to ping, “That’s bullshit.” Adding a contrarian viewpoint to everything seems to help prevent me from inserting new cached thoughts. It doesn’t, however, help me find currently cached thoughts. Also, I have learned to internalize the response because telling everyone they were wrong wasn’t helping my social life.
This seems to be a cached thought in reaction to a partly physical event. Does this fall under the label cached thought?
thomblake recently posted a comment that has a great antidote for cached thoughts. Reverse the claim and see if it (a) triggers another cached thought or (b) seems as likely given cursory examination.
Well, I won’t. I will be thinking, “Bullshit!”
PS) What are the naughty language expectations here?
Random discussion points related to this behavior:
How does something like Wikipedia relate to cached thoughts?
How do you find cached thoughts in yourself?
How many cached thoughts are hanging around simply to provide excuses for stupid or selfish behavior? Does anyone actually believe love is irrational, or do they merely believe in their belief that love is irrational?
I liked this post a lot. I have the same “bullshit” sense for certain words and thoughts, but my concern is that this is just a bias caused by extrapolating from one example. There are certain political issues, for instance, that I’ve seen so many illogical arguments for that I’m biased against them now.
As far as love being irrational, there actually is some evidence for that.
Hmm… actually, you made me realize there is another part to this reaction. I tend to ignore not-beliefs. I draw beliefs on my map. There isn’t a place for a not-belief. An active negative belief can be drawn, but I see this differently than refusing to accept a belief due to lack of evidence.
In other words, I see a difference between, “I don’t believe the Earth is flat” and “I believe the Earth is not flat.”
I have an argument about this distinction pretty frequently, though. I have no idea how LessWrong feels about it. Also, I am making these terms up as I go along. There are probably more accurate ways to say what I am saying.
But the point is that the “bullshit” response drops its victim into the realm of not-belief. As such, I forget about it and when the question pops up again there isn’t anything in that area of the map to contend with the proposed answer. If the reaction is, again, “bullshit,” nothing will change.
In a more Bayesian framework, you assign each statement a probability of being true, based on all the evidence you’ve collected so far. You then change these probabilities based on new evidence. An active negative belief corresponds to a low probability, and refusing to accept a belief based on lack of evidence might correspond to a slightly higher probability.
Okay, sure, that makes sense. I guess I have a weird middle range between, say, 45-55% that I just drop the belief from the probability matrix altogether because I am lazy and don’t want to keep track of everything. The impact on my actions is negligible until well beyond this threshold.
An exception would be something in which I have done a lot of studying/research. The information, in this case, is extremely valuable. The belief still sits in the “Undecided” category, but I am not throwing out all that hard work.
Is this sort of thing completely sacrilegious toward the Way of Bayes? Note that 45-55% is just a range I made up on the spot. I don’t actually have such a range defined; it just matches my behavior when translating me into Bayes.
No, that makes sense to me. You have essentially no information about whether a statement is more likely to be true or false at that percentage range.
Sort-of agree. The Bayesian formulation of a similar strategy is: Don’t bother remembering an answer to a question when that answer is the same as what you would derive from the ignorance prior. i.e. discard evidence whose likelihood ratio is near 1. However, the prior isn’t always 50%.
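The strategy described above, discarding evidence whose likelihood ratio is near 1 and keeping the rest, can be sketched in log-odds form. (A minimal illustration with made-up names and an arbitrary threshold, not a standard API.)

```python
import math

def update_belief(prior_prob, likelihood_ratios, threshold=0.1):
    """Bayesian update in log-odds form. Evidence whose likelihood ratio
    is near 1 (|log LR| < threshold) moves the posterior so little that,
    as suggested above, it can simply be discarded rather than remembered."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        if abs(math.log(lr)) < threshold:
            continue  # likelihood ratio ~1: not worth tracking
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Prior 50%, one strong piece of evidence (LR = 4), one negligible (LR = 1.02):
posterior = update_belief(0.5, [4.0, 1.02])
# posterior is 0.8 -- only the informative evidence mattered
```

Working in log-odds makes the “drop uninformative evidence” rule natural: each piece of evidence is just an additive term, and terms near zero change almost nothing.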
Cool. I guess I never thought about what the distinction between active and passive disbelief would be for a Bayesian. It makes perfect sense now that I think about it… and it would have certainly made a whole bunch of discussions in my past a lot easier.
Pssh. Always learning something new, I guess.
Entire ways of acting and reacting—even mini-facets of personalities, tones of voice, turns of phrase—are also cached. These don’t have to be from someone else—they can be from the “you” of 10 years ago (which may have been a composite of your role models at the time). They are otherwise known as habits that you haven’t updated or re-evaluated for a long time.
The mini-pattern of action worked (or made sense) when you were 7, and it’s so second-nature that it hasn’t even entered your conscious awareness since then to give you a chance to reassess it.
“Well, maybe the human species doesn’t deserve to survive.”
does not imply or approximate
“I want to end the human race person by person.”
and your implication that it does is incredibly stupid.
Please, explain how the human race could fail to survive without each of its members dying.
Humanity can survive without deserving to, and someone may prefer that state of affairs even given that judgement. Also, someone can believe that it doesn’t deserve to but not care to be the instrument of justice in that case.
I consider those relatively low-probability interpretations when someone’s talking about humanity deserving not to survive, though.
I never said that, nor implied it. You’re completely misinterpreting what I said.
Consider the difference between these two scenarios:
a) There’s a family of 10 people, who I normatively have decided do not deserve to live. I, over the course of the next 40 years, kill them person by person, using an instant and physically painless method, one every 4 years.
b) There’s a family of 10 people who I normatively have decided do not deserve to live. I wait 40 years, and kill them all at once, using an instant and physically painless method.
Answer me this: are they the same thing?
The same end result, yes, but not the same process, and the amount of suffering in process a) is far greater, would you agree?
Actually, assuming that the people in the family are relatively normal and want to live and want each other to live, and assuming that they don’t know about your plans before you start enacting them, I’d expect the suffering to be significantly higher in situation A, since the family members experience more time mourning and probably considerable time worrying about being murdered.
I’m not actually sure how these scenarios are relevant, though.
Exactly my point. [mixed up a) and b) in the last question].
A bad thing about a person’s death is the negative externality imposed on those who mourn them dying.
So to equate someone not wanting to kill their child [the equivalent of scenario a), killing a person with people around to mourn them] with someone deciding that the human race, as a whole, deserves to die [which is the equivalent of scenario b)], or to say that this person is a hypocrite, is totally idiotic.
If in the original essay it said it would be hypocritical of someone to say that the human race deserves to die while being unwilling to push the button which instantly ended all human life, then it would make sense.
Why the downvotes on the original reply? Are people so thin-skinned that they can’t take their arguments being called stupid, or are they so ignorant that they bury an argument they don’t agree with?
No, glutamate. Your original comment was rude and uninteresting. “Stupid” isn’t an informative criticism (not even if you specify that the stupidity is “incredible”), and it signals contempt and disrespect besides. Uninformative criticisms that signal that attitude are not readily welcomed here.
You could have said—if I interpret your view correctly, which I may or may not—something like:
That, and it’s pretty standard around here to assume that the human species dying off is bad even if it happens in such a way that nobody knows it’s happening or happened—it’s not actually about suffering, in other words.
The vocabulary someone uses in an attack on an argument shouldn’t be limited by the degree to which the language might offend someone. Or should it?
To be explicit: I am not calling him stupid! Only someone intelligent could write an article like this, that’s obvious, and I agree with the rest of it.
And yes, that’s a superior phrasing of my argument. I should have been more descriptive in the original post, that’s my fault. Do you agree with it?
This is an ongoing controversy, but if you can be inoffensive without sacrificing too many other virtues, it seems best to go for it.
That’s good to know. It wasn’t at all clear—any of it! - from your original comment.
I would agree with a weak, purely descriptive form of my restatement.
If I’m a member of the family, I prefer (a), because it gives us nine opportunities to identify you, track you down, and kill you before you kill us all.
Who needs a meme or a cached thought to ask. Can you ask one question at a time or make less than one statement without so many words?
Interesting article, but I’m not so sure about the “cache” analogy. A typical cache in computer science has two major differences from the effect you’re pointing to:
1. A cache stores the result of a computation: the result of a complex algorithm, of a database or external-server query, of a disk read… The computation is done once and then the result is stored for later use. Very few caches in computer science cache results that come from elsewhere and were never computed at least once. While in your case, it’s not “I did once the complex job of thinking about love and rationality, I concluded love is not rational, so I cached that computation, and later on I reuse it” but “I heard that love is not rational; I didn’t do the computation, but still I stored the result.”
2. As a consequence of 1., a cached result in computer science is (almost) never wrong. It may be obsolete (an old version of an Internet page), but not wrong (that old version was the correct one when you fetched it). In the cases described by the article, the “cached thoughts” are wrong values stored in the cache, not just obsolete values.
What you refer to sounds more like a cache poisoning attack than the normal operation of a caching system.
I don’t know how to rephrase the “cached thoughts” expression into something more accurate but still as potent as an expression, so I’ll stick with your “cached thoughts” for now, but I’m uncomfortable with it because of those two differences.
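The contrast drawn above can be shown in a few lines: ordinary memoization only serves results it once computed, whereas a “cached thought” behaves like a poisoned entry, a stored value that was never derived. (A toy sketch; `make_cached` and `square` are made-up names for illustration.)

```python
def make_cached(fn):
    """Ordinary memoization: results are computed once, then reused."""
    cache = {}
    def wrapper(x):
        if x not in cache:
            cache[x] = fn(x)   # a cache hit is always a result we once derived
        return cache[x]
    wrapper.cache = cache      # exposed so we can "poison" it below
    return wrapper

@make_cached
def square(x):
    return x * x

square(12)                 # computed once, stored
assert square(12) == 144   # served from cache, still correct

# A "cached thought" is closer to cache poisoning: a value is stored
# without ever having been computed, and lookups repeat it uncritically.
square.cache[7] = 13       # never derived, just absorbed
assert square(7) == 13     # the wrong answer completes the pattern
```

Once the bad entry is in place, nothing in the lookup path ever rechecks it, which is exactly the failure mode the article describes.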
indeed.
If we decouple the cost of caching into “was true but is false” and “was never true”, it may be that one dominates the other in likelihood. So maybe the most efficient solution to the “cached thought” problem is not rethinking things, but ignoring most things by default. This, however, has the opportunity cost of false negatives.
I’ve personally found that I am very dependent on cached thoughts when learning/doing something new (not necessarily bad). Like breadth over depth. What I do is try to force each cached thought to have a contradictory, or at least very different, twin.
E.g. though I have never coded in it, if I hear “C++”, I’ll (try to) think both “not worth it, too unsafe and error-prone” and “so worth it, speed and libraries”. Whenever I don’t have enough data to have a strong opinion, I must say that I am OK with caching thoughts, as long as I know they are cached and I try to cache “contradictory twins” together.
I believe Schopenhauer came to the same conclusion.
“Reading is merely a surrogate for thinking for yourself; it means letting someone else direct your thoughts. Many books, moreover, serve merely to show how many ways there are of being wrong, and how far astray you yourself would go if you followed their guidance. You should read only when your own thoughts dry up, which will of course happen frequently enough even to the best heads; but to banish your own thoughts so as to take up a book is a sin against the holy ghost; it is like deserting untrammeled nature to look at a herbarium or engravings of landscapes.”
However, I don’t think I would go so far as to say that you should think until you run out of thoughts. Finding the right balance seems to be an art we are continually improving on—adapting to each situation.
Your exhortation to think reminds me of an experience I had while drifting in the summer of 1987, basically describable as: ignore the clear intention you recognize in the written prose bytes and various signs that society frames and promotes to view, and look instead for the best sense you can find in their shortest form that is different from the obvious one, on a case-by-case basis, while assuming a source that has access to your intimacy, like a friend who is subject to aphasia and can’t express himself clearly but may have acutely intelligent ideas that relate to you. Or, if you want, while imagining yourself in a Matrix-like, simulated reality, with remote hackers trying to pass you useful messages through indirect, constrained means. A limiting feature of that experience was that it drove me, with bitter surprises, to the observation that society frames and promotes much news involving the death of people that you’d normally ignore with bliss.
“One neuropsychologist estimates that visual perception is 90 percent memory, less than 10 percent sensory [nerve signals].” Apparently, we even use cached thought to see. We’re really biased, huh?
src?
An example of a cached thought reported by Alex Blumberg in This American Life, episode 293: “A Little Bit of Knowledge.”
Many more examples in this episode.
From experience I find that the appeal-to-nature fallacy dominates cached thoughts, manifesting itself mainly as conservatism. For example, when I broached the topic of life extension with my mother.
From A New Kind of Science by Stephen Wolfram, page 0621:
Does that mean there are other usable tricks?
I was curious so I looked up the reasoning (and original paper) behind the hundred-step rule.
“Connectionist Models and Their Properties” (http://csjarchive.cogsci.rpi.edu/1982v06/i03/p0205p0254/MAIN.PDF)
“Neurons whose basic computational speed is a few milliseconds must be made to account for complex behaviors which are carried out in a few hundred milliseconds (Posner, 1978). This means that entire complex behaviors are carried out in less than a hundred time steps.”
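The quoted reasoning is just a division, which can be made explicit. (The specific millisecond figures below are illustrative, taken loosely from the quote’s “few milliseconds” and “few hundred milliseconds”.)

```python
# The arithmetic behind the hundred-step rule quoted above.
spike_time_ms = 5        # one basic neural computation: a few milliseconds
behavior_time_ms = 500   # one complex behavior: a few hundred milliseconds

# However many neurons fire in parallel, at most this many can fire in sequence:
max_sequential_steps = behavior_time_ms // spike_time_ms
assert max_sequential_steps == 100
```

Varying the figures within the ranges Feldman and Ballard give still lands in the neighborhood of a hundred sequential steps, which is why the rule is stated as an order-of-magnitude bound rather than an exact limit.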
How do you consider interpretation of the cache? For example,
“Death gives rise to meaning” can be interpreted in many different ways: some see it as inspiring, while others see it as meaningless, confusing or untrue.
Eliezer seems to me to be against the “death gives life meaning” cache, from what I am able to predict so far, since he seems to support cryonics, transhumanism, etc.