Dissolving the Question
“If a tree falls in the forest, but no one hears it, does it make a sound?”
I didn’t answer that question. I didn’t pick a position, “Yes!” or “No!”, and defend it. Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network. At the end, I hope, there was no question left—not even the feeling of a question.
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Like, say, “Do we have free will?”
The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude: “Yes, we must have free will,” or “No, we cannot possibly have free will.”
Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places. So they try to define very precisely what they mean by “free will”, and then ask again, “Do we have free will? Yes or no?”
A philosopher wiser yet may suspect that the confusion about “free will” shows the notion itself is flawed. So they pursue the Traditional Rationalist course: They argue that “free will” is inherently self-contradictory, or meaningless because it has no testable consequences. And then they publish these devastating observations in a prestigious philosophy journal.
But proving that you are confused may not make you feel any less confused. Proving that a question is meaningless may not help you any more than answering it.
The philosopher’s instinct is to find the most defensible position, publish it, and move on. But the “naive” view, the instinctive view, is a fact about human psychology. You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science: If free will doesn’t exist, what goes on inside the head of a human being who thinks it does? This is not a rhetorical question!
It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn’t change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.
You could look at the Standard Dispute over “If a tree falls in the forest, and no one hears it, does it make a sound?”, and you could do the Traditional Rationalist thing: Observe that the two don’t disagree on any point of anticipated experience, and triumphantly declare the argument pointless. That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?
The key idea of the heuristics and biases program is that the mistakes we make often reveal far more about our underlying cognitive algorithms than our correct answers. So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?
The cognitive algorithms we use are the way the world feels. And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks. There can be things in the mind that cut skew to the world.
For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world. This dangling unit is often useful as a shortcut in computation, which is why we have them. (Metaphorically speaking. Human neurobiology is surely far more complex.)
This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you’re left wondering: “But does the falling tree really make a sound, or not?”
But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question, corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or better yet, you understand the technical workings of Naive Bayes—then you’re done. Then there’s no lingering feeling of confusion, no vague sense of dissatisfaction.
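To make the picture concrete, here is a minimal sketch, assuming a Network-2-style structure like the one in Feel the Meaning: observable “edge” units feeding a single central unit (the same shape that Naive Bayes gives to its latent class variable). The feature names, weights, and threshold below are illustrative assumptions, not anything from the original essay.

```python
# Minimal sketch: a central "category" unit fed by observable edge units.
# Once every edge unit is clamped to a known value, the central unit's
# activation is a fixed function of those values -- there is nothing left
# for it to "find out", even though it still feels like an open question.

def central_unit_activation(edge_values, weights, threshold=0.5):
    """Weighted-sum-and-threshold toy neuron (illustrative, not biology)."""
    total = sum(w * v for w, v in zip(weights, edge_values))
    return total >= threshold * sum(weights)

# The falling-tree dispute, with both senses of "sound" clamped to the
# values that *both* arguers already anticipate:
edge_units = {
    "acoustic_vibrations_present": 1.0,   # everyone expects vibrations
    "auditory_experience_present": 0.0,   # everyone expects no listener
}
weights = [0.5, 0.5]

print(central_unit_activation(list(edge_units.values()), weights))
# Whatever this prints, no anticipated experience depends on it.  The only
# "unanswered question" left is how to set a unit that corresponds to
# nothing in the forest -- the dangling unit still asking whether it
# should fire.
```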
If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn’t leave anything behind.
A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
You may not even want to admit your ignorance, of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs of free will, it would seem like a concession to the opposing side to concede that you’ve left anything unexplained.
And so, perhaps, you’ll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.
Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, “But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words.” That’s arguing that, or explaining why, a confusion exists. Now look at the neural network structure in Feel the Meaning. That’s explaining how, disassembling the confusion into smaller pieces which are not themselves confusing. See the difference?
Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion. Indeed, it is an entirely different art. Bear this in mind, and you should feel less embarrassed to say, “I know that what you say can’t possibly be true, and I can prove it. But I cannot write out a flowchart which shows how your brain makes the mistake, so I’m not done yet, and will continue investigating.”
I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early. If you keep asking questions, you’ll get to your destination eventually. If you decide too early that you’ve found an answer, you won’t.
The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there’s someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don’t know exactly how the cognitive algorithms work, has nothing to do with the searing folly of their position...
But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you’re done.
So be warned that you may believe you’re done, when all you have is a mere triumphant refutation of a mistake.
But when you’re really done, you’ll know you’re done. Dissolving the question is an unmistakable feeling—once you experience it, and, having experienced it, resolve not to be fooled again. Those who dream do not know they dream, but when you wake you know you are awake.
Which is to say: When you’re done, you’ll know you’re done, but unfortunately the reverse implication does not hold.
So here’s your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about “free will”?
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in “free will”, not to explain how.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
This is one of the first real challenges I tried as an aspiring rationalist, once upon a time. One of the easier conundrums, relatively speaking. May it serve you likewise.
I have no idea why or how someone first thought up this question. People ask each other silly questions all the time, and I don’t think very much effort has gone into discovering how people invent them.
However, note that most of the silly questions people ask have either quietly gone away, or have been printed in children’s books to quiet their curiosity. This type of question, along with many additional errors in rationality, seems to attract people. It gets asked over and over again, from generation unto generation, without any obvious, conclusive results.
The answer to most questions is either obvious, or obviously discoverable; some easy examples are “Does 2 + 2 = 4?” or “Is there a tiger behind the bush?”. This question, however, creates a category error in the human linguistic system, by forcibly prying apart the concepts of “sound” and “mental experience of sound”. Few people will independently discover that a miscategorization error has occurred; at first, it just seems confusing. And so people start coming up with incorrect explanations; they confuse a debate about the definition of the word “sound” with a debate about some external fact (most questions are about external facts, so this occurs by default); they start dividing into “yes” and “no” tribes; etc.
At this point, the viral meme-spreading process begins. An ordinary question (“Is the sky green?”) makes reference to concepts we are already familiar with, and interrelates them using standard methodology. A nonsensical question either makes reference to nonexistent concepts (“Are rynithers a type of plawistre?”), or uses existing concepts in ways that are obviously incorrect (“Is up circular?”). Our mind can deal with these kinds of questions fairly effectively. However, notice the form of a question asked by the tribal chief/teacher/professor/boss: things like “Does electromagnetism affect objects with no net charge?”. Even at large inferential distances, the audience will probably pick up on some of the concepts. Most laymen have heard of “electromagnetism” before, and they have a vague idea of what a “charge” is. But they lack the underlying complexity, the stuff beneath the token “electromagnetism”, needed to give a correct answer.
From the inside, this sounds pretty much like the makes-a-sound question: familiar concepts (“tree”, “falling”, “sound”) are mixed together in ways which aren’t obviously nonsense, but don’t have a clearly defined answer. The brain assumes that it must lack the necessary “underlying knowledge” to get past the confusion, and goes on a quest to discover the nonexistent “knowledge”. At the same time, the question conveys an impression of intelligence, and so the new convert tells it to all of his friends and co-workers in an attempt to sound smarter. Many moons ago, this exact question even appeared in a cartoon I saw, as some sort of attempt to get kids to “think critically” or whatever the buzzword was.
I think a brain architecture/algorithm that would debate about free will would have been adapted for large amounts of social interaction in its daily life. This interaction would use markedly different skills (e.g., language) from those of more mundane activities. More importantly, it would require a different level of modeling to achieve any kind of good results. One brain would have to contain models for complicated human social, kin, and friendly relationships, as well as models for individuals’ personalities.
At the center of the mesh of social interactions would be the tightest wad of connections. That would be the brain/person, interacting with and modeling all the other members of their tribe/society. However, their brain cannot model itself modeling others with perfect fidelity, and so many simplifications are made even there. These simplifications pile on top of the perceptual differences that a human sees between (itself, other humans) and (everything else). A whole different mental vocabulary arises between descriptions/models of fellow humans and descriptions/models of everything else. Only in humans does it make predictive sense to talk about intent, capability, and inclination, and the wide gap between these kinds of perceived “properties” of fellow socially interacting humans, and the generally much simpler properties seen in inanimate objects and animals, leads the brain to allocate them to widely separated groups of buckets. It is this perceived separation in mental thing-space that leads to a free-will boundary being drawn around the cluster of socially interacting humans. When this boundary is objected to, people go their natural arguing ways.
This is just a first attempt, so I think I may have fallen for some of the traps specifically warned against in the post. I hope others will attempt to put up their own explanations and maybe even poke some holes in mine :)
I’ll definitely pay attention to further comments on this homework assignment.
I would say: people have mechanisms for causally modeling the outside world, and for choosing a course of action based on its imagined consequences, but we don’t have a mechanism for causally modeling the mechanism within us that makes the choice, so it seems as if our own choices aren’t subject to causality (and are thus “freely willed”).
However, this is likely to be wrong or incomplete, firstly because it is merely a rephrasing of what I understand to be the standard philosophical answer, and secondly because I’m not sure that I feel done.
This feels to me like part of the puzzle, as you say.
I think the other part is some quality of mind-like-ness (or optimising-agent-ness, if you prefer). People rarely attribute free will to leaves in the wind, despite their inability to accurately model their movements. On the other hand, many people do regularly attribute something suspiciously free-will-like to evolution(s).
I don’t have a good idea how either of these two concepts should be represented, or attached to one another, though.
A difference of predictions between Maksym’s proposed answer and mine occurs to me. If the sense of free will comes from not being able to model one’s own decision process, rather than from taking the intentional stance towards people but not other things, then I would think that each individual would tend to think that she has free will, but other people don’t. Since this is not the default view, my answer must be wrong or very incomplete.
“Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.”
This line goes in that book you’re going to write.
You will find that the need to nail things down is mostly a male thing. Women are more driven to add complexity, as more interesting to them.
http://home.att.net/~rhhardin9/vickihearne.womenmath.txt
And “algorithm” is a picture in Wittgenstein’s sense.
A warning to those who would dissolve all their questions:
Why does anything at all exist? Why does this possibility exist? Why do things have causes? Why does a certain cause have its particular effect?
I find that for questions like these, it is better to ask “how” than to ask “why”. When you replace “why” with “how”, the questions become:
Why does anything exist? We observe that everything seems to obey simple mathematical rules. Do these rules become what we observe as reality, and if so, how?
Why does this possibility exist? How is it that we observe only this possibility?
Why do things have causes? How does causality work?
Why does a certain cause have its particular effect? How does causality work in this particular case?
I don’t think this answer meets the standards of rigour that you set above, but I’m increasingly convinced that the idea of free will arises out of punishment. Punishment plays a central role in relations among apes, but once you reach the level of sophistication where you can ask “are we machines”, the answer “no” gives the most straightforward philosophical path to justifying your punishing behaviour.
Why? If the answer is “no” then applying a proper punishment causes the nebulous whatsit in charge of the person’s free will to change their future behaviour.
If the answer is “yes” then applying a proper punishment adjusts the programming of their brain in a way that will change their future behaviour.
The only way a “yes” makes it harder to justify punishing someone is if you overexpand a lack of “free will” to imply “incapable of learning”.
Things in thingspace commonly coming within the boundary ‘free will’ :
- moral responsibility
- could have done otherwise
- possible irrational action
- possible self-sacrificial action
- gallantry and style (thanks to Kurt Vonnegut for that one)
- non-caused agency
- I am a point in spacetime and my vector at t+1 has no determinant outside myself
- whimsy
- ‘car c’est mon bon désir’ (“for such is my good pleasure”), absolute monarchy
- you can put a gun at my head and I’ll still say ‘no’
- idealistic non-dualism
- consciousness subtending matter
- disagreeing with Mum & Dad
- disagreeing with the big Mom & Pop up there in the White House
- armed response
- no taxation without representation … no taxation even with representation (daft)
- ‘No dear, not tonight, I’ve got a headache....’
- ….
aw hell, just go read Dennett : ‘Elbow Room’, he did it better than I could.
Only in humans does it make predictive sense to talk about intent, capability, and inclination, and the wide gap between these kinds of perceived “properties” of fellow socially interacting humans, and the generally much simpler properties seen in inanimate objects and animals, leads the brain to allocate them to widely separated groups of buckets. It is this perceived separation in mental thing-space that leads to a free-will boundary being drawn around the cluster of socially interacting humans.
Careful there. Animistic beliefs are quite widespread in tribal societies, so the notion that the brain allocates two entirely distinct clusters to humans and animals vs. inanimate objects is quite suspect.
When you’re done, you’ll know you’re done, but unfortunately the reverse implication does not hold.
So when you have the impression you are done, you are not necessarily done because some have this impression without really being done. But then when you are really done, you won’t actually know you are done, because you will realize that this impression of being done can be misleading.
So here’s your homework problem: What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about “free will”?
I’ve written up my answer to this on my blog.
I claim that the reason we posit a thing called free will is that almost all of our decision-making processes are amenable to monitoring, analysis and even reversal by “critic” algorithms that reside one (or more) levels higher up. [I say almost all, because the top level has no level above it. The buck really does stop there]. There would probably be no feeling of free will if there were only two levels, but I think that with three or more, you have situations where … continue reading
Robin: So when you have the impression you are done, you are not necessarily done because some have this impression without really being done. But then when you are really done, you won’t actually know you are done, because you will realize that this impression of being done can be misleading.
You’d think it would work that way, but it doesn’t. Are you awake or asleep right now? When you’re asleep and dreaming, you don’t know you’re dreaming, so how do you know you’re awake?
If you claim you don’t know you’re awake, there’s a series of bets I’d like to make with you...
As usual, this is better settled by experiment than by “I just know”. My favourite method is holding my nose and seeing if I can still breathe through it. Every time I’ve tried this while dreaming, I’ve still been able to breathe, and, unsurprisingly, so far I’ve never been able to while awake. So if I try that, then whichever way it goes, it’s pretty strong evidence. There — now it’s science and there’s no need to assume “I feel that I know I’m awake” implies “I’m awake”.
Of course, if you’re the sort of person who never thinks to question your wakefulness while dreaming, then the fact that you’ve thought of the question at all is good evidence that you’re awake. But you need a better experiment than that if you also want to be able to get the right answer while you actually are dreaming.
[Apologies if replying to super-old comments is frowned upon. I’m reading the whole blog from the beginning and occasionally finding that I have things to say.]
I have been reading LW since the beginning and have not seen anyone object to replies to super-old comments (and there were 18-month-old comments on LW when LW began because all Eliezer’s Overcoming-Bias posts were moved to LW).
Moreover, a lot of readers will see a reply to a super-old comment. We know that because people have made comments in really old comment sections to the effect of “if you see this, please reply so we can get a sense of how many people see comments like this”.
Moreover, discouraging replies to super-old comments discourages reading of super-old comments. But reading super-old comments improves the “coherence” of the community by increasing the expected amount of knowledge an arbitrary pair of LW participants has in common. (And the super-old stuff is really good.)
So, reply away, I say.
Archaeologist here, I’ll be taking this comment as permission!
That’s awesome.
I never devised anything as cool as that, but I did discover a pretty reliable heuristic: If I ever find myself with any genuine doubt about whether this is a dream, then it definitely is a dream.
Or in other words not feeling like you “just know” you’re awake is very strong evidence that you’re not.
It’s funny that the working reality tests for dreaming are pretty stupid and decidedly non-philosophical. For instance, the virtual reality the brain sets up for dreams apparently isn’t good enough to do text or numbers properly, so when you are dreaming you’re unable to read the same text twice and see it saying the same thing, and digital clocks never work right. (There’s an interesting parallel here to the fact that written language is a pretty new thing in evolutionary scale and people probably don’t have that much evolved cognitive capacity to deal with it.)
There’s a whole bunch of these: http://en.wikibooks.org/wiki/Lucid_Dreaming/Induction_Techniques#Reality_checks
This reminds me of a horrible nightmare I had back in High School. It was the purest dream I had ever had: the world consisted of me, a sheet of paper, and a mathematical problem. Every time I got to the bottom of the problem, before starting to solve it, I went back up to make sure I had gotten it right… only to find it had CHANGED. That again, and again, and again, until I woke up, a knot in my stomach and covered in sweat.
To realize that this dream has an explanation based on neural architecture rather than on some fear of exams is giving me a weird, tingly satisfaction...
Isn’t fear of exams also due to neural architecture?
How does that work out?
It is indeed. I can’t take credit for it, though; don’t remember where I learned it, but it was from some preexisting lucid dreaming literature. I think it’s an underappreciated technique. They usually recommend things like seeing if light switches work normally, or looking at text and seeing if it changes as you’re looking at it, but this is something that you can do immediately, with no external props, and it seems to be quite reliable.
That’s similar to what I originally did, but it doesn’t always work — false awakenings (when you dream that you’re waking up in bed and everything’s normal) are especially challenging to it. In those cases I usually feel pretty confident I’m awake. Still, that heuristic probably works well for most dreams that are actually dreamlike.
Do other people have the same problem I do, then? When I’m dreaming, I often find that it’s dark and that light switches don’t work. I’m always thinking that the light bulbs are burnt out. It’s so frustrating, ’cause I just want to turn on the light and it’s like I never can in a dream.
That’s exactly my method too.
“Do you think that’s air you’re breathing now?”
Morpheus
When I dream about being underwater, I can breathe in the dream, but I also am under the impression that I’m holding my breath somehow, even though I’m breathing. Like, I’ll “hold my breath” only, I’ve just made the mental note to do it and not actually done it. But it won’t be clear to me in the dream whether or not I’m holding my breath, even though I’m aware that I’m still breathing. It’s weird and contradictory, but dreams are capable of being like that. It’s like how in a dream, you can see someone and know who they’re supposed to be, even though they may look and act nothing like that person they supposedly are. Or how you can be in both the first and third person perspective at the same time.
Heh, I’ve recently had a few weird half-lucid dreams, where on some level I seem to know that I’m dreaming, but don’t follow this to its logical conclusions and don’t gain much intentionality from it… In one of them, I ran into a friend I hadn’t seen in a long time and later found he’d left something of his with me, and I wanted to return it to him. So I thought I’d look him up on Facebook and message him there; but, I reasoned, this is a dream, so what if that wasn’t who I thought it was, but just someone else who looked exactly like him? So I felt I’d rather avoid going that route lest I message him and then feel foolish if it did turn out to be someone else, somehow accounting for this aspect of dreams but not noticing that this being a dream meant there was no real social risk to me and no pressing need to return his property in the first place. (Also kind of amusing that in retrospect he actually didn’t look that much like the person he was supposed to be, yet in the dream I was able to know who he was while wondering “what if that was someone else who just looked like him?”.)
Last night I had a dream which for some time rendered reality in aerial view as a sprite grid resembling old Gameboy RPGs, including a little pixel character who I knew was me.
My test is even easier—I rarely remember dreaming. It’s very similar to just blacking out. I lay down, fall asleep, and then 6-7 hours have gone by and I’m being awoken by my alarm clock. I sometimes remember parts of a particularly disturbing dream but they don’t have any detail after I wake up, and within hours it is almost completely gone. In these cases it may feel like a half hour or so has passed.
I had a dream of that type a few days ago, and other than the impression that it was disturbing I can’t remember a single thing about it. Nothing.
Stuff that happens when I’m awake, though, I remember very well.
I remember about three dreams per night with no effort. Sometimes when I wake up I can remember more, but then it’s impossible for me to remember them all for long. If I want to remember each of four or more dreams, I have to rehearse them immediately, otherwise I will usually forget all but three. The act of rehearsing makes it harder to remember the others, and it’s weird to wake up with 6-7 dreams in my mental cache, knowing that I can’t keep them all because after I actively remind myself what 3-4 were about the others will be very faint and by the time I have thought about five the others will be totally gone.
In related(?) news, often my brain wakes up before my body, and I can’t move so much as my eyeballs! It’s like the opposite of sleepwalking.
If I’m lying in bed, totally “locked in” and remembering a slew of dreams, I know I am awake. No one has complicated thoughts about several dreams from totally different genres while experiencing that one is unable to move a muscle without being awake.
If I’m arguing to the animated electrified skeleton of a shark that has made itself at home in my pool that he’d be better off joining his living brother in a lake in the Sierra Nevadas, who is eating campers I tell him to in exchange for hot dogs...I have a good chance of suspecting it’s a dream, even within the dream.
Neither of these are tests, of course.
Just in case you aren’t already aware (and haven’t become aware since this was written) -- this is a common phenomenon (from which I suffer also), described here:
http://en.wikipedia.org/wiki/Sleep_paralysis
I’m not sure if I’ve experienced sleep paralysis before, but I’ve had experiences very similar to it. I will “wake up” from a dream without actually waking up. So I will know that I’m in bed, my mind will feel conscious, but my eyes will be closed and I’ll be unable to move. Usually I try to roll around to wake myself up, or to make noise so someone else will notice and wake me up. But it doesn’t work, ’cause I can’t move or make noise, even though it feels like I am doing those things (and yet I’m aware that I’m not, because I can feel the lack of physical sensation and auditory perceptions). But when I actually wake up and can move, it feels like waking up, rather than just not being paralyzed any more. And sometimes when I’m in that “think I’m awake and can’t move” state, I imagine my environment being different than it actually is. Like, I might think I’m awake and in my own bed, and then when I wake up for real, I realize I’m at someone else’s place. Which makes me think I wasn’t actually awake when I felt like I was. But it feels awfully similar to sleep paralysis, so I’m not sure if it is sleep paralysis or just something very similar.
I would say that’s very likely sleep paralysis; it is very similar to my own experience.
As far as I can tell, without an outside observer to confirm this, my eyes are actually open during SP. After enough episodes I do occasionally get what seems to be evidence of this (e.g., I will notice details of the world around me that, when I wake up, I can clearly see are actually there).
Sleep paralysis is associated with hallucinations; particularly (for me and I think also in general) feelings of fear, or hallucinations of some entity ‘coming for you’, or people talking indistinctly, or people calling your name.
Generally you (or at least I) can’t really think well in that state; as a state of consciousness, I guess I would describe it as “between dreaming and wakefulness.” Sometimes I’m aware of what’s occurring, and sometimes I’m not.
...I’ve had some pretty complicated dreams, where I’ve woken up from a dream(!), gone to work, made coffee, had discussions about the previous dream, had thoughts about the morality or immorality of the dream, then sometime later come to a conclusion that something was out of place (I’m not wearing pants?!), then woken up to realize that I was dreaming. I’ve had nested dreams a good couple of layers deep with this sort of thing going on.
That said, I think you have something there, though. Sometimes I wake up (dream or otherwise) and I remember my dream really vividly, especially when I awake suddenly due to an alarm clock or something.
But I’ve never had a dream in which I struggled to remember what was in my dream. At the least, such an activity should really raise my priors that I’m top-level.
One method to check if you’re dreaming is to hold your nose shut and try to breathe through it—if you’re dreaming, your nose will work “normally”, whereas if you’re awake actual physics will take effect. (Note: every time I’ve done this while dreaming, I immediately got very excited and woke up.)
So if you do have trouble telling dream from reality, you don’t remember it. :-)
Yep :)
Ditto.
Then there’s Edmond Jabes, on freedom and how words come to mean anything.
Eliezer, you seem to be saying that the impression you get when you are really done feels different from the impression you get when you ordinarily seem to be done. But then it should be possible to tell when you just seem to be done, as this impression is different. I can imagine that sometimes our brains just fail to make use of this distinction, but it is quite another to claim that we could not tell when we just seem to be done, no matter how hard we tried.
Eliezer, also, the bet you proposed would only be enforced in situations where I am not dreaming, so it would really be a bet conditional on not dreaming, which defeats the purpose.
1) Some people claim they can recognize that they’re in a dream state.
2) The quoted claims are an example of the rhetorical fallacy known as equivocation.
Given the scientific evidence behind lucid dreaming, I wouldn’t call it a ‘claim’. If someone can, while in the midst of REM sleep, as determined by a polysomnograph, deliberately transmit a previously-agreed-upon signal to an external observer, that’s reasonable evidence that the person in question is aware that they are, in fact, in a dream state.
Of course, if you disagree, it would be appropriate to research the phenomenon yourself.
When I’m dreaming, I always know I’m dreaming, and when I’m awake I always know I’m awake.
I realize that this doesn’t apply to many other people, however… even the second part.
A fuller explanation of the preceding: As an example of Robin’s point, “I can imagine that sometimes our brains just fail to make use of this distinction,” the reason that some people don’t know when they’re dreaming is that they are unable, at that time, to pay attention to all the aspects of their experience; otherwise they would be able to easily distinguish their state from the state of being awake, because the two states are very different, even subjectively. I pay attention to these aspects even while dreaming, and so I recognize that I’m dreaming.
Ughh, more homework. Overcoming Bias should have a sister blog called Overcoming Laziness.
Eliezer, you seem to be saying that the impression you get when you are really done feels different from the impression you get when you ordinarily seem to be done. But then it should be possible to tell when you just seem to be done, as this impression is different.
Yes, exactly; it feels different and you can tell the difference—but first you have to have experienced both states, and then you have to consciously distinguish the difference and stay on your guard. Like, someone who understands even classical mechanics on a mathematical level should not be fooled into believing that they understand string theory, if they are at all on their guard against false understanding; but someone who’s never understood any physics at all can easily be fooled into thinking they understand string theory.
I think I’ll give this a try. Let’s start with what a simple non-introspective mind might do:
Init (probably recomputed sometimes, but cached most of the time):

I1. Draws a border around itself, separating itself from the “outside world” in its world model. In humans and similarly embodied intelligences you could get away with defining one’s own body as “inside”, if internal muscle control works completely without inspection.

Whenever deciding on what to output:

A1. Generates a list of all possible next actions of itself, as determined in I1. For human-like embodied minds, this could be a list of available body movements.
A2. Computes a probability distribution over the likely future states of the world resulting from each action, consulting the internal world-model for prediction. Resolution, temporal range, and world-model building are beyond the scope of this answer.
A3. Assigns utilities to each considered future state.
A4. Assigns preferences over the probability distributions of futures. This could, e.g., use expected utility or some satisficing algorithm.
A5. Chooses the possible next action with the most-preferred distribution of futures. Tie-breaking is implementation-defined.
A6. Executes that action.
As part of A2, the reactions of other intelligent entities are modeled as part of the general world model. This kind of mind architecture does not model itself in the present; that’d lead to infinite recursion: “I’m thinking about myself thinking about myself thinking about …”. It also wouldn’t achieve anything, since the mind as instantiated necessarily has a higher resolution than any model of itself stored inside itself. It will, however, model past (for lost data) or future versions of itself.
The important point here is that this mind doesn’t model itself while computing the next action. In the extreme case it needn’t have facilities for introspection at all. Humans obviously have some such facilities. Either in between deciding on output, in between the individual steps A1-A6, completely in parallel, or some combination of those, humans spend CPU time analyzing the algorithm they’re executing, to determine systematic errors or possibilities for optimization. I’ll call the neural module/program that does this the introspector. When introspecting a thread which generates motor output (A1-A6 above), one (very effective) assumption of the introspected algorithm will always turn out to be “My own next action is to-be-determined. It’d be ineffective for me to run a model of myself to determine it.” For a mind that doesn’t intuitively understand the self-reference and infinite-recursion parts, this turns into “My own actions can’t be modeled in advance. I have free will.”
In the cases where the introspected thread is running another instance of the introspector, the introspector still isn’t attached to its own thread; doing that would lead to infinite recursion. Each introspector will work similarly to the motor-output-choosing algorithm described above, except that the generated output will be in the form of new mental heuristics. Therefore, the same “It’d be ineffective to run a model of myself to determine my next action.” assumption in the algorithm can be observed, and “Free will.” is still the likely conclusion of a mind that doesn’t understand the rationale behind the design.
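Here is a minimal sketch of the I1/A1-A6 loop and the introspector blind spot described above, written in Python. The class, the toy world model, and every name in it are illustrative assumptions rather than anything from the original comment; the point is only that an agent built this way would report “my own next action can’t be modeled in advance” when it introspects.

```python
# Illustrative sketch (hypothetical names throughout) of the non-introspective
# decision loop above, plus an introspector that cannot model the thread it is
# currently running in.

class SimpleAgent:
    def __init__(self, world_model, utility):
        self.world_model = world_model   # callable: (state, action) -> {future_state: probability}
        self.utility = utility           # callable: future_state -> float
        self.boundary = "own body"       # I1: border between "me" and the outside world

    def possible_actions(self, state):   # A1: actions available inside the I1 boundary
        return ["move_left", "move_right", "stay"]

    def decide(self, state):
        best_action, best_score = None, float("-inf")
        for action in self.possible_actions(state):           # A1
            futures = self.world_model(state, action)          # A2: distribution over futures
            score = sum(p * self.utility(f)                     # A3/A4: expected utility
                        for f, p in futures.items())
            if score > best_score:                              # A5: most-preferred action wins
                best_action, best_score = action, score
        return best_action                                      # A6: execute (returned here)

    def introspect(self):
        # The introspector can model other agents, and past or future versions
        # of this one, but not the decision process running right now -- doing
        # so would recurse forever.  From the inside, that blank spot reads as
        # "my next action cannot be predicted", i.e. "I have free will."
        return {"other agents": "modeled via world_model",
                "my current decision": "to-be-determined (not modeled)"}

# Hypothetical toy world: staying put is slightly better than moving.
agent = SimpleAgent(
    world_model=lambda s, a: {f"{s}_after_{a}": 1.0},
    utility=lambda f: 1.0 if f.endswith("_after_stay") else 0.0,
)
print(agent.decide("start"))   # -> "stay"
print(agent.introspect())      # the blind spot that feels like freedom
```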
That’s a model along the lines of the one I was thinking of in response to the question; any number of simple algorithms for processing data, creating a worldview, determining the expected utility of a series of actions, and choosing the action which seems to have the greatest utility might believe it has ‘free will’, by the definition that its actions cannot be predicted, if it is not capable of understanding its own nature.
Humans are, of course, more complicated than this, but this idea alone produces the question… is your mind the sort of algorithm which, if all of its processing details, utility functions, and available worldview data are fully known, will produce output which can be predicted in advance, given the information? That doesn’t feel like an ending, but it is, perhaps, grounds to explore further.
Following up...
Having (almost) finished the Quantum Physics sequence since this last comment, and come to the point at which this particular assignment is referred to, I figured I’d post my final conclusion here before ‘looking at the answer’, as it were.
Given a basic understanding of QM, and further understanding that macroscopic phenomena are an extension of those same principles… Knowing that nothing ‘epiphenomenal’ is relevant to the question of consciousness… And assuming that no previously unobserved micro-phenomenon is responsible for consciousness, by virtue of the fact that even if there were, there is, at present, no reason to privilege that particular hypothesis...
There’s no question left. What we call consciousness is simply our view of the algorithm from the inside. I believe that I have free will because it seems like the choices I make change the future I find myself in… but there are a great many other factors implicit in my thinking before I even arrive at the point of making a choice, and the fact that the probabilities involved in defining that future are impossible to calculate under existing technology does not mean that such a feat will never be possible.
That said, even full knowledge of the state of a given brain would not allow you to predict its output in advance, as even in isolation, that brain would divide into a very large number of possible states every instant, and QM proves that there is no way of determining, in advance, which state that brain will be in at any given time. This is not randomness, per se… given sufficient information, and isolation from contaminating entanglements, one could theoretically plot out a map of possible states, and assign probabilities to them, and have reasonable expectations of finding that mind in the predicted states after a determined time… but could never be certain of finding any given result after any amount of time.
That doesn’t mean that I don’t have control over my actions, or that my perception of consciousness is an illusion… what it does mean is that I run on the same laws of physics as anything else, and the algorithms that comprise my awareness are not specially privileged to ignore or supersede those laws. Realizing this fact is no reason to do anything drastic or strange… this is the way that things have been all along, and my acknowledgment of it doesn’t detract from the reality of my experiences. I could believe that my actions are determined by chance instead of choice, but that would neither be useful, nor entirely true. Whatever the factors that go into the operation of my cognitive algorithms, they ultimately add up to me. Given this, I might still believe that I have free will… while at the same time knowing that the question itself does not have the meaning I thought it had before I seriously considered the question.
This is not a problem. A computer runs a program the same way in almost all future outcomes, weighted by probability. QM shows how to determine what happens even in cases where it’s not as simple as that.
Do neurons operate at the quantum level? I thought they were large enough to have full decoherence throughout the brain, and thus no quantum uncertainty, meaning we could predict this particular version of your brain perfectly if we could account for the state and linkages of every neuron.
Or do neurons leverage quantum coherence in their operation?
I was once involved in a research of single ion channels, and here is my best understanding of the role of QM in biology.
There are no entanglement effects whatsoever, due to extremely fast decoherence; however, there are pervasive quantum tunneling effects involved in every biochemical process. The latter is enough to preclude exact prediction.
Recall that it is impossible to predict when a particular radioactive atom will decay. Similarly, it is impossible to predict exactly when a particular ion channel molecule will switch its state from open to closed and vice versa, as this involves tunneling through a potential barrier. Given that virtually every process in neurons is based on ion channels opening and closing, this is more than enough.
To summarize, tunneling is as effective in creating quantum uncertainty as decoherence, so you don’t need decoherence to make precise modeling impossible.
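As a toy illustration of the point, one can sample memoryless open/close switching times for a single channel; the rate constants below are invented, not measured, and the exponential-waiting-time assumption is only a loose stand-in for tunneling statistics:

```python
# Toy Monte Carlo of a single two-state ion channel whose open/close switching
# is memoryless (exponential waiting times), loosely analogous to the
# unpredictable tunneling events described above. The rate constants are
# invented for illustration, not taken from any measurement.
import random

OPEN_RATE = 50.0     # closed -> open transitions per second (assumed)
CLOSE_RATE = 200.0   # open -> closed transitions per second (assumed)

def simulate_channel(t_total=0.05, seed=0):
    random.seed(seed)
    t, state, events = 0.0, "closed", []
    while t < t_total:
        rate = OPEN_RATE if state == "closed" else CLOSE_RATE
        t += random.expovariate(rate)                       # when the next switch
        state = "open" if state == "closed" else "closed"   # happens is known only
        events.append((round(t, 5), state))                 # statistically
    return events

print(simulate_channel()[:5])
```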
Quantum uncertainty is decoherence. All decoherence is uncertainty. All uncertainty is decoherence. If it’s impossible to predict the exact time of tunneling, that means amplitude is going to multiple branches, which, when they entangle with a larger system, decohere.
That is not quite the conventional meaning of decoherence, though. Of course, if I recall from your QM sequence, it is, indeed, yours. Let me explain what I think the difference is between the two phenomena: a spin measurement and the tunneling process.
During an interaction such as spin measurement, a factorized state of a quantum system becomes entangled with the (quantum) state of the classical system as some of the terms in the product state decay away (according to Zurek, anyhow). The remaining “pointer” states correspond to what is usually termed “different worlds” in the MWI model. I believe that this is your interpretation of the model, as well.
Now, consider radioactive decay, or, to simplify it, a similar process: the relaxation of an excited atom to its ground state, resulting in the emission of a photon. This particular problem (spontaneous emission) requires QFT, since the number of particles is conserved in QM (though Albert Einstein was the first to analyze it). Specifically, the product state of an excited atom and the ground (vacuum) state of the electromagnetic field evolves into a ground state of the atom and an excited state of the field (well, one of almost infinitely many excited states of the field, the “almost” part being due to the Planck-scale cutoff). There is virtually no chance of the original state reappearing, as it occupies almost zero volume in the phase space (this phase space includes space, momentum, position, spin, etc.). I believe even time is a part of it.
To call radioactive decay “decoherence”, one would have to identify the ground state of the field (electromagnetic vacuum) with the classical system that “measures” the excited atom. Calling a vacuum state a classical system seems like a bit of a stretch.
An alternative approach is that the measurement happens when an emitted photon is actually detected by some (classical) external environment, or when the atom’s state is measured directly by some other means.
I am not sure if there is a way to distinguish between the two experimentally. For example, Anton Zeilinger showed that hot fullerene molecules self-interfere less than cold ones, due to the emission of “soft photons” (i.e. radiating heat). His explanation is that the emitted radiation causes decoherence of the fullerene molecule’s factorized state, due to the interaction with the (unspecified) environment, and hotter molecules emit shorter-wave radiation, thus constraining the molecule’s position (sorry, I cannot find the relevant citation).
If you identify each of the branches in the MWI model with each possible excited state of the electromagnetic field, you would have to assume that the worlds keep splitting away forever, as all possible (measured) emission times must happen somewhere. This is even more of a stretch than calling the vacuum state a classical system.
Feel free to correct me if I misunderstood your point of view.
Interesting! I hadn’t thought about quantum tunneling as a source of uncertainty (mainly because I don’t understand it very well—my understanding of QM is very tenuous).
You don’t need macroscopic quantum entanglement to get uncertainty. Local operations (chemical reactions, say) could depend on quantum events that happen differently on different branches of the outcome, leading to different thoughts in a brain, where there’s not enough redundancy to overcome them (for example, I’ll always conclude that 6*7=42, but I might give different estimates of population of Australia on different branches following the question). I’m not sure this actually happens, but I expect it does...
I’m not sure I understand how quantum events could have an appreciable effect on chemical reactions once decoherence has occurred. Could you point me somewhere with more information? It’s very possible I misunderstood a sequence, especially the QM sequence.
I could also see giving different estimates for the population of Australia for slightly different versions of your brain, but I would think you would give different estimates given the same neuron configuration and starting conditions extremely rarely (that is, run the test a thousand times on molecule for molecule identical brains and you might answer it differently once, and I feel like that is being extremely generous).
Honestly I would think the decoherence would be so huge by the time you got up to the size of individual cells that it would be very difficult to get any meaningful uncertainty. That is to say, quantum events might be generating a constant stream of alternate-universe brains, but for every brain that is functionally different from yours there would be trillions and trillions of brains that are functionally identical.
A single water molecule has 54 quarks (64 fundamental particles, if you include its 10 electrons), and many of the proteins and lipids our cells are made of have thousands of atoms per molecule and therefore tens of thousands of quarks. I am having a hard time envisioning anything less than hundreds of quarks in a molecule doing enough to change the way that molecule would have hooked into its target receptor, and further that another of the same molecule wouldn’t have simply hooked into the receptor in its place and performed the identical function. There may be some slight differences in the way individual molecules work, but you would need hundreds to thousands of molecules doing something different to cause a single neuron to fire differently (and consequently millions of quarks), and I’m not sure a single neuron firing differently is necessarily enough for your estimate of Australia to change (though it would have a noticeable effect given enough time, a la the butterfly effect). The amount of decoherence here is just staggering.
To summarize what I’m saying, you’d need at least hundreds of quarks per molecule zigging instead of zagging in order for it to behave differently enough to have any meaningful effect and probably at least a few hundred molecules per neuron to alter when/how/if that neuron fires, or whether or not the next neuron’s dendrite receives the chemical signal. I would think such a scenario would be extremely rare, even with the 100 billion or so neurons and 100 trillion or so synapses in the brain.
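For reference, the back-of-envelope arithmetic behind these counts (a sketch only; the neuron and synapse figures are the usual rough estimates):

```python
# Back-of-envelope arithmetic behind the counts above (a sketch, not a model):
# a water molecule is 2 H (1 proton each) + 1 O (8 protons, 8 neutrons)
# = 18 nucleons of 3 quarks each, plus 10 electrons.
nucleons = 2 * 1 + 8 + 8            # 18
quarks = nucleons * 3               # 54
electrons = 10
print(quarks, quarks + electrons)   # 54 quarks, 64 fundamental particles in all

neurons = 1e11                      # ~100 billion neurons (rough figure)
synapses = 1e14                     # ~100 trillion synapses (rough figure)
print(f"{neurons:.0e} neurons, {synapses:.0e} synapses")
```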
You may be right; I don’t really know what’s involved in chemical reactions. A chemist who also knows enough physics would likely be able to resolve this question reliably. Maybe you really know the answer, but I don’t know enough to be able to evaluate what you wrote...
See my comment.
Most of the proposed models in this thread seem reasonable.
I would write down all the odd things people say about free will, pick the simplest model that explained 90% of it, and then see if I could make novel and accurate predictions based on the model. But, I’m too lazy to do that. So I’ll just guess.
Evolution hardwired our cognition to contain two mutually-exclusive categories, call them “actions” and “events.”
“Actions” match: [rational, has no understandable prior cause]. “Rational” means they are often influenced by reward and punishment. Synonyms for ‘has no understandable prior cause’ include ‘free will’, ‘caused by élan vital’, and ‘unpredictable, at least by the prediction process we use for things-in-general like rocks’.
“Events” match: [not rational, always directly caused by some previous and intuitively comprehensible physical event or action]. If you throw a rock up, it will come back down, no matter how much you threaten or plead with it.
We are born to axiomatically believe actions we take of this innate ‘free will’ category have no physical cause. In this model, symptoms might include:
believing there is an interesting category called ‘free will’
believing that arguing whether humans either belong to, or don’t belong to, this ‘free will’ category, is an interesting question
believing that if we don’t have ‘free will’, it’s wrong to punish people
believing that if we don’t have ‘free will’, we are marionettes, zombies, or in some other way ‘subhuman’.
believing that if we don’t understand what causes a thunderstorm or a crop failure or an eclipse, it is the will of a rational agent who can be appeased through the appropriate sacrifices
believing that if our actions are caused by God’s will, fate, spiritual possession, an ancient prophecy, Newtonian dynamics, or some other simple and easily-understandable cause, we do not have ‘free will’. However, if our actions are caused by an immaterial soul, spooky quantum mechanics, or anything else that ‘lives in another dimension beyond the grasp of intuitive reason’, then we retain ‘free will’.
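A toy sketch of this proposed two-category wiring, with feature names and rules that are illustrative guesses rather than anything the commenter specified:

```python
# Toy sketch of the hypothesized innate classifier: things that respond to
# incentives and have no intuitively visible prior cause get filed as
# "actions" (and credited with 'free will'); everything else is an "event".
# The feature names and rules here are illustrative guesses.

def classify(responds_to_incentives, has_visible_prior_cause):
    if responds_to_incentives and not has_visible_prior_cause:
        return "action (ascribed 'free will')"
    return "event (ordinary physical causation)"

print(classify(True, False))   # a person deciding             -> action
print(classify(False, True))   # a thrown rock coming down     -> event
print(classify(True, True))    # a person whose causes we see  -> event (fate, physics, possession)
```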
I’m not particularly confident my model is correct; the human capacity to spot patterns where there are none works against me here.
Great post, Rolf Nelson.
HOMEWORK REPORT
With some trepidation! I’m intensely aware I don’t know enough.
“Why do I believe I have free will? It’s the simplest explanation!” (Nothing in neurobiology is simple. I replace Occam’s Razor with a metaphysical growth restriction: Root causes should not be increased without dire necessity).
OK, that was flip. To be more serious:
Considering just one side of the debate, I ask: “What cognitive architecture would give me an experience of uncaused, doing-whatever-I-want, free-as-a-bird Capricious Action that is so strong that I just can’t experience (be present to) being a fairly deterministic machine?”
Cutting it down to a bare minimum: I imagine that I have a Decision Module (DM) that receives input from sensory-processing modules and suggested-action modules at its “boundary”, so those inputs are distinguishable from the neuron-firings inside the boundary: the ones that make up the DM itself. IMO, there is no way for those internal neuron firings to be presented to the input ports. I guess that there is no provision for the DM to sense anything about its own machinery.
By dubious analogy: a Turing machine looks at its own tape; it doesn’t look at the action table that determines its next action, nor can it modify that table.
To a first approximation, no matter what notion of cause and effect I get, I just can’t see any cause for my own decisions. Even if somebody asks, “Why did you stay and fight?”, I’m just stuck with “It seemed like a good idea at the time!”
And these days, it seems to me that culture, the environment a child grows up within, is just full of the accouterments of free will: make the right choice, reward & punishment, shame, blame, accountability, “Why did you write on the wall? How could you be so STUPID!!?!!”, “God won’t tempt you beyond your ability to resist.” etc.
Being a machine, I’m not well equipped to overcome all that on the strength of mere evidence and reason.
Now I’ll start reading The Solution, and see if I was in the right ball park, or even the right continent.
Thanks for listening.
There is some confusion about the meaning of free will. I can decide freely whether to drink a coffee or a tea, but you will see me always choosing the coffee. Am I free to choose? Really?
I’m free to choose whether to use my bicycle to go to work, or take the bus. Well—it’s raining. Let’s take the bus.
A bloody moron stole my bike—now I’m not free to choose, I’m forced to take the bus.
There are inner and outer conditions which influence my decision. I’m not free when I have to stop at the traffic light, but if I’m willing to risk the penalty, I’m free again. Maybe I have internalized outer pressure in such a way that I can’t distinguish it from inner wishes, bias, fear, or animus.
The second problem is that, as a model of our brain, we look at it as if it were a machine or a computer. We know there are neurons firing, and when we view our decision-making process that way, it becomes something foreign to us—we don’t see it as part of ourselves, the way we see our feet in action while walking.
If you told somebody that he isn’t walking, that it’s his feet which walk, everybody would laugh. Yes—the feet are part of him. He cannot walk without his feet. And firing neurons are the same thing as thinking.
The process of thinking is this machine in our head in action. It’s your machine—it’s you! And mine is mine, and it’s me.
So we shouldn’t fall into the trap of distinguishing between ‘me’ and ‘my thoughts, my brain, some neurons firing’. And we know that there are inner and outer influences on our decisions. We have a history, which influences whether we like the idea of taking the bus or going by bicycle. There are some stronger and some weaker influences, maybe millions of them, so the decision-making process is too complex to predict in all cases.
I know I have drunk coffee, not tea, for the last 20 years—but on the other hand, if there is a strong enough influence, mainly a disruption of my habits, I might drink tea tomorrow.
I might get forced to do something I don’t like, so it will be someone else’s decision, someone else’s freedom of choice.
Is it his brain, or is it my brain, which decides? It’s freedom, if you can decide. If your neurons decide. Your brain. It’s you.
[comment removed by author]
Yes: this page contains a link to his solution.
Eliezer, you wrote:
But when you’re really done, you’ll know you’re done. Dissolving the question is an unmistakable feeling...
I’m not so sure. There have been a number of mysteries throughout history that were explained by science, and the resolutions didn’t feel immediately satisfying to people even though they do to us now—like the explanation of light as being electromagnetic waves.
I frequently find it tricky to determine whether a feeling of dissatisfaction indicates that I haven’t gotten to the root of a problem, or whether it indicates that I just need time to become comfortable with the explanation. For instance, it feels to me like my moral intuitions are objectively correct rules about how people should and shouldn’t behave. Yet my reason tells me that they are simply emotional reactions built into my brain by some combination of biology and conditioning. I’ve gotten somewhat more used to that fact over time, but it certainly didn’t feel at first like it successfully explained why I feel that X is “wrong” or Y is “right.”
Dissolving a question and answering it are two different things. To dissolve a question is to rid yourself of all confusion regarding it, so that either the question reveals itself to be a wrong question, or the answer becomes ridiculously obvious (or at least, the way to answer it will become obvious).
In the second case, it would still be possible that the ridiculously obvious answer will turn out to be wrong, but this has little to do with whether or not the question has been dissolved. For example, we could one day find evidence that certain species of trees don’t make sound waves when they fall and there are no humans within a 10 mile radius. This won’t change the fact that the question was fully dissolved.
The neural explanation doesn’t seem parsimonious, given that there appears to be a much simpler cognitive “glitch” that causes the tree-falling-in-the-forest argument and the free will argument: our habitual propensity to mistake the communication devices known as words for the actual concepts they correspond to in our own minds. And as a natural consequence, people forget that the concept they associate with a word might be different from the concept another person associates with the same word.
One common result of these errors is that arguers forget to check that their definitions agree. That explains the how of the tree-falling-in-the-forest argument entirely, with no lingering doubts.
Another common result is the tendency you mention, of philosophers to try to answer questions as stated, as if the words must mean something coherent just because they sound like they should. This explains many of the free will arguments, although—here’s the tricky part—every thinker’s arguments would have to be explained via a slightly different manifestation of the error:
Thinker A reified. Thinker B equivocated on the meaning of “free.” Thinker C conflated agents because the words in their argument didn’t distinguish agency clearly enough. Nevertheless, the underlying error is the same. (Needless to say, the fact that everyone will interpret the words and make the argument differently precludes the possibility of a categorical dismissal of all concepts that anyone will ever call “free will.”)
As I understand it, there was no debate on free will before about three centuries ago. Since that time, the idea that we might all be automata has been taken somewhat seriously. In earlier times, it would have been considered absurd to question free will.
So, did our cognitive algorithm change back around the time of Galileo, Descartes, and Newton? Of course not. So how can the algorithm be “blamed” for the existence of a debate? By itself it cannot. So we have to imagine that the debate arises from the combination of algorithm and data. The algorithm is the same, only the data has changed. And clearly the new data which enables the debate is the evidence that the physical universe may be lawlike and deterministic, together with a learned bias toward monism.
Ok, so I think I have improved the question a little, but I have come nowhere close to either answering or dissolving it. So, come at it from a different direction:
My personal tapdance for avoiding the free-will debate is to refuse to see any contradiction between the claims “I have a choice” and “I am an automaton”. Seeing a contradiction there (I claim) is the flaw in our cognitive algorithm. And to make it completely obvious that there is no contradiction, we need to actually construct an automaton with free-will. How do we know when we have succeeded? Well, the automaton simply has to pass the free-will Turing test.
Now here is where it gets interesting. I imagine that a critic will say something like, “Sure, you can build a machine that can fool me. You can fool every one else too. But there is one person you can never fool. Yourself. You, having built the machine, will understand its workings, and hence will know that it doesn’t really have free will.”
Not too bad as an argument against the possibility of AFW (Artificial Free Will). But it gives us the hint that lets us dissolve the original question. We now see that free-will is in the eye of the beholder. To every one else, my automaton is free, to me it is not. Freedom is not a unary predicate. It is a two-place predicate—you need to identify both the candidate automaton/agent and the observer who evaluates it. One observer sees the device as determined, another observer sees it as free. And they can both be right. There is no contradiction. The free-will debate is hereby dissolved.
So, pop back to the original question. Why does there seem to be a real debate? Well, obviously, it is because there seems to be a real contradiction. So, what feature of the algorithm makes it seem that there must be a contradiction here? If my analysis is correct, it is the gizmo which biases us against concepts which are “subjective” (i.e. observer dependent) and in favor of an illusion of objective reality. A metaphysics that says “An agent is either free or it is not”.
My apologies for the length of this comment.
This is quite incorrect. Determinism (as opposed to the default folk psychology of free will) has been long debated; from Wikipedia:
This is a very incomplete list, which omits people like the Stoics such as Chrysippus; the other article mentions later the Atomists Leucippus and Democritus.
In Eastern tradition, there are many different takes on ‘karma’.
The atheist Carvaka held a deterministic scientific view of the universe, and a materialist view of the mind (although so little survives it’s hard to be sure). I’m not entirely clear on the Samkhya darsana’s position on causality, though their views on satkaryavada (as opposed to the common Indian position of asatkaryavada) sound determinist.
And of course, who can really generalize about all Buddhist schools’ positions? I have no doubt whatsoever that many Buddhist philosophies could be fairly described as completely determinist.
Here’s my attempt (I haven’t read the comments above in detail, as I don’t want the answer spoiled in case I’m wrong).
For whatever reason, it is apparent that the conscious part of our brain is not fully aware of everything that our brain does. Now let’s imagine our brain executing some algorithm, and see what it looks like from the perspective of our consciousness. At any given stage in the algorithm, we might have multiple possible branches, and need to continue to execute the algorithm along one of those possible branches. To determine which branch to follow, we need to do some computation. But that computation isn’t done on a conscious level (or rather, sometimes it is, but the fastest computations are done on a subconscious level). However, the computation is done in parallel, so our consciousness “sees” all of the possible next steps, and then feels as if it is choosing one of them. In reality, that “choice” occurs when all of the subconscious processes terminate and we pick the choice with the highest score.
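A minimal sketch of that picture, with placeholder names and scores:

```python
# Minimal sketch of the picture above (all names and scores are placeholders):
# subconscious scorers evaluate the branches in parallel; what consciousness
# reports as "choosing" is just reading off the winner once they terminate.

def subconscious_score(option):
    # stand-in for a fast, non-introspectable evaluation
    return {"stay on the couch": 0.3, "go for a walk": 0.7}.get(option, 0.0)

def conscious_choice(options):
    scores = {o: subconscious_score(o) for o in options}  # computed below awareness
    return max(scores, key=scores.get)                    # experienced as "my choice"

print(conscious_choice(["stay on the couch", "go for a walk"]))
```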
I believe the conscious-unconscious separation has an advantage in human-human interaction (in the game-theoretic sense). It is easier for the conscious you to lie when you know less.
I have a similar, but slightly different theory, based on what I’ve read on neuroscience.
Let’s say you are sitting on a couch, in front of a plate of potato chips.
Several processes in your brain that your conscious mind is not aware of activate, and decide that you want to reach out and eat a potato chip. This happens in an evolutionarily very ancient part of your brain.
At this point, after your subconscious mind has created this desire but before you actually do it, your conscious mind becomes aware of it. At this point, your conscious mind has some degree of veto power over the decision. (What we usually perceive as “self control”.) You may think it’s unhealthy to eat a potato chip right now, and “decide” not to (that is, your conscious mind’s algorithm overrides your instinctive algorithm). This “self control” is not total, however; if you are hungry enough, you may not be able to “resist”. Also, if your conscious mind is distracted (say, you are playing a very absorbing video game), you may eat the chips without really noticing what you are doing.
So, from the point of view of your conscious mind, an idea to eat chips came from somewhere else, and then your conscious mind “chose” whether or not to act on it.
My $0.02: all it takes is a system a) without access to its own logs, and b) disposed to posit, for any event E for which a causal story isn’t readily available, a default causal story in which some agent deliberately caused E to advance some goal.
Given those two things, it will posit for its own actions a causal story in which it is the agent, since it’s the capable-of-agency thing most tightly associated with its actions.
Note that this does not require there not be free will (whatever that even means, assuming it means anything), it merely asserts that whether there is or not (or a third alternative, if there is one), the system will classify its actions as its own doing unless it has some specific reason to otherwise classify them.
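A rough sketch of those two dispositions in code; everything here is illustrative:

```python
# Rough sketch of the two dispositions: (a) no access to the system's own
# logs, and (b) a default agent-based causal story for any event lacking one.
# The example events and stories are invented.

def explain(event, known_causes, event_is_mine):
    if event in known_causes:                  # a causal story is readily available
        return known_causes[event]
    agent = "I" if event_is_mine else "some agent"
    return f"{agent} deliberately caused {event!r} to advance a goal"

known = {"rock fell": "gravity pulled it down"}
print(explain("rock fell", known, event_is_mine=False))
print(explain("my arm moved", known, event_is_mine=True))  # no log of the real cause -> "I did it"
```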
Some rough notes on free will, before I read the “spoiler” posts or the other attempted solutions posted as comments here.
(Advice for anyone attempting reductions/dissolutions of free will or anything else: actually write notes, make them detailed when you can (and notice when you can’t), and note when you’re leaving some subproblem unsolved for the time being. Often you will notice that you are confused in all kinds of ways that you wouldn’t have noticed if you had kept all of it in your head. (And if you’re going to try a problem and then read a solution, this is a good way of avoiding hindsight bias.))
What kind of algorithm feels like free will from the inside?
Some ingredients:
Local preferences:
The algorithm doesn’t necessarily need to be an optimization process with a consistent, persistent utility function, but when the algorithm runs, there needs to be some locally-usable preference function over outcomes, since this is a decision algorithm.
Counterfactual simulation:
When you feel that you “could” make one of several (mutually exclusive) “choices”, that doesn’t mean that all of them are actually possible (for most senses of “possible” that we use outside the context of being confused about free will); you’re going to end up doing at most one of them. But it occurs to you to imagine doing any of them, because you don’t yet know what you’ll decide (and you don’t know what you’ll decide because this imagining is part of the algorithm that generates the decision). So you look at the choices you think you might make, and you imagine yourself making each of those choices. You then evaluate each imagined outcome according to some criterion (specifying which, I think, is far outside the scope of this problem), and the algorithm then returns the choice corresponding to the imagined outcome that maximizes that criterion.
(Imagining a maybe-impossible world — one where you make a specific decision which may not be the one you will actually make — consists of imagining a world to which all of your prior beliefs about the real world apply, plus an extra assumption about what decision you end up making. If we want to go a bit deeper: suppose you’re considering options A, B, and C, and you’re imagining what happens if you pick B. Then you imagine a world which is identical to (how you imagine) the real world, except with a different agent substituted for you, identical to you except that their decision algorithm has a special case for this particular situation consisting of “return B”.)
(I realize that I have not unpacked this so-called “imagining” at all. This is beyond my current understanding, and is not specific to the free will issue.)
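A compact sketch of the algorithm as described, with the special-cased copy of the agent made explicit; the world model is a stub and the toy consequences are invented:

```python
# Compact sketch of decide-by-counterfactual-simulation: for each option,
# imagine a world identical to the believed one except that the agent's
# decision rule is special-cased to return that option, then return the
# option whose imagined world scores best under the chosen criterion.

def imagine_world(beliefs, forced_choice):
    world = dict(beliefs)                        # copy of the believed world...
    world["my_choice"] = forced_choice           # ...plus the stipulated decision
    world["outcome"] = {"A": 2, "B": 5, "C": 1}[forced_choice]  # toy consequences
    return world

def decide(beliefs, options, criterion=lambda w: w["outcome"]):
    imagined = {opt: imagine_world(beliefs, opt) for opt in options}
    return max(options, key=lambda opt: criterion(imagined[opt]))

print(decide({"weather": "rain"}, ["A", "B", "C"]))  # -> "B"
```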
Why does that feel non-deterministic?
Because we don’t have any way of knowing the outcome for sure other than just following the algorithm to the conclusion. Due to the mind projection fallacy, our lack of knowledge of our deterministic decisions feels like those decisions actually not being deterministically implied yet.
...Let me phrase that better: The fact that we don’t know what the algorithm will output until we follow it to its normal conclusion, feels like the algorithm not having a definite output until it reaches its conclusion. Since our beliefs about reality just feel like reality, our blank or hazy or changing map of the future feels like a blank or hazy or changing future; as is pointed out in “Timeless Causality”, changing our extrapolation of the future feels like changing the future. When you don’t know what decision you’ll make, that feels like the future itself is undecided. And the fact that we can imagine multiple futures until it’s not imaginary or the future anymore, feels like there are multiple possible futures until we pick one to go with.
Why does the idea of determinism feel non-free?
Well, there’s the whole metaphor of “laws”, to begin with. When we hear about fundamental physical laws, our intuition doesn’t go straight to “This is the fundamental framework in which everything in the universe happens (including everything about me)”. “Laws” sound like constraints imposed on us. It makes us imagine some causal force acting on us and restricting us from the outside; something that acts independently of and sometimes against mental causes, rather than what you see when you look at mental causes under a microscope (so to speak).
That also seems to explain why people think that physical determinism would preclude moral responsibility. When someone first tells you that everything about you is reducible to lawful physics, it can intuitively sound like being told that you’re under the Imperius curse or that you’re a puppet and some demon called “Physics” is pulling the strings. If your intuition says that determinism means people are puppets, then surely it’s easy to think that implies people cannot be held responsible for their actions; clearly Physics must get the credit or the blame.
(In one sense, yes, physics must get the credit or blame — but only the region of physics that we call “you” for short.)
And there’s the fact that, if it’s explained poorly, the idea of physical determinism can sound about the same as the idea of fate. (Or even if it is explained well, but you pattern-match it as “fate” from the beginning and let that contaminate your understanding of the rest of the explanation.) Of course, the ideas couldn’t be more different: fate is the idea that your choices don’t matter because the outcome will be the same no matter what; and this (rightly) sounds non-free, because it implies that this algorithm you’re running doesn’t ultimately have any influence on the future. Physical determinism, on the other hand, says quite the opposite: that the future is causally downstream from your actions, which are causally downstream from the algorithm you’re running; but given sufficiently confusing/confused descriptions of determinism (like “everything is predetermined”), it is possible to mistake them for each other.
Why does the idea of predictability feel non-free?
The previous bit, on physical determinism feeling non-free, isn’t the whole story. Even when the idea of “lawfulness” isn’t invoked, people still think as though being theoretically predictable is a limitation on free will. They still wonder things like “If God is omniscient, then he must know every decision I will make, so how can I have free will?” (And atheists say things like this a lot to argue that an omniscient god is impossible because then we couldn’t have free will (particularly as an argument against religious traditions that argue (badly) against the problem of evil by saying that God gave us free will). I’m not sure if this is because it’s a soldier on their side or if they just don’t know reductionism. Probably some of both.) This probably goes back to the bit about the mind projection fallacy; if you don’t know what you’re going to do, that feels like reality itself being indeterminate, and if you’re told that reality itself is not indeterminate — that the territory isn’t blank where your map is blank — then, if you haven’t learned to strictly distinguish between the map and the territory, you’ll say “But I can see plainly that the territory is blank at that point!”, and you’ll dismiss the idea that your decisions could theoretically be predictable.
(Tangential to the actual reduction, but: this seems like it could be covered by a principle analogous to the Generalized Anti-Zombie Principle. If the thing you think “free will” refers to is something that you’d suddenly have less of if I built a machine that could exactly predict your decisions (even if I just looked at its output and didn’t tell you it even existed), then it’s clearly not the thing that causes you to think you have “free will”.)
Why do we “explain” free will in terms of mysterious substances placed in a separate realm declared unknowable by fiat?
I don’t have the cognitive science to answer that, and I’ll consider it outside the scope of the free will problem in particular, because that’s something we seem to do with everything (as in MAMQ), not just free will.
Your challenge does not prove anything. A very complex algorithm can never have free will. Complexity may limit predictability, but it does not demonstrate free will. A collision of two balls can be predicted. Three-ball collisions are much more difficult to predict. Hundreds of balls may be beyond our current technology to predict. There is some number of balls beyond which even a computer the size of the Universe could not predict the outcome.
Free Will does not exist.
Big statement. I can hear the uproar.
Consider: After the Big Bang, the Universe cooled and matter coalesced. No Free Will was involved. Stars formed and exploded, creating heavy elements. No Free Will was involved. A cloud of dust coalesced into 8 planets, one in the sweet spot. No Free Will was involved. Life began on the third planet from the star. No Free Will was involved. Animals crawled out of the water. No Free Will was involved. An animal dropped from the trees and walked on 2 feet. No Free Will was involved. The Sun rose above the horizon yesterday, rises today, and will rise tomorrow. No Free Will was / is / will be involved. In 200 years, we will all be dead. No Free Will will be involved. Chemical reactions occur in the human brain. No Free Will is involved. A neuron fires. No Free Will is involved. A million neurons fire. No Free Will is involved. Everything that did happen, happened because it happened. No Free Will is involved. Tomorrow, things will happen because time moves on. No Free Will is involved.
All of these happen for one and only one reason. Time moves in one direction, and there is only one time stream known or knowable to science.
This completely fails to acknowledge the point of the entire post. What does it mean to ask whether we have free will in the first place? What’s actually going on when someone asks it?
Yes, I failed to acknowledge the post, because in 4-dimensional space-time the stack trace for considering Free Will is the same as for figuring out how to get food, or for planets moving around a star. They are all physical results from an initial Cause.
Sure.
And as long as you’re just as indifferent to the state of mind of the individual who is executing those physical results as you would be to the putative state of mind of an orbiting planet, then there is no particular reason to engage with the two differently.
And that hypothetical indifference is itself the physical result of earlier causes, and the consequences of expressing that indifference (for example, engaging with people the same way you engage with planets in their orbits, and people being upset at that, and etc.) are just further physical results, and on and on.
Conversely, if I am not indifferent to individual states of mind (because my prior causes are hypothetically different than yours), then I may engage with people differently, and they may respond differently, and that is also a physical result emerging from prior causes.
It seems you propose that the matter in a mind has a property that the matter in a rock does not, and that the physical laws governing a mind are more than those governing a rock. If we agree on that, a discussion of Free Will exists; otherwise rocks are just as capable of discussing Free Will.
I would not have a problem with leaving the property of mind-matter, or the additional physical laws, undefined. Proposing that something exists is the beginning of the scientific process.
I wasn’t proposing any such thing, but yes, I do believe that the material properties of minds and rocks are different… for example, I’m 99+% confident that all minds are able to perform computations as a consequence of their material properties (and as a consequence of the physical laws that relate to those properties), and that most rocks are not able to do so as a consequence of their different material properties.
I find it unlikely that most rocks can discuss anything at all.
I was wrong to assume Mind has a physical existence. It is invalid to assert properties of minds and rocks together. Rocks are material, and Mind is not. Mind does not have any physical property; it is a property we sometimes ascribe to some matter. A human brain at birth contains one set of matter, and at death a different set, yet both times it contains the same mind. A human brain contains the same matter just before death and right after death, yet before death it has the mind property and after death it does not.
What is the logic to say rocks do not have a mind? Just because we cannot perceive the mind does not prove it does not exist. A tree falling in the forest always makes noise (air vibrations) without a mind to hear the sound.
So if you simulated the thoughts of a newborn and the same person at death, you wouldn’t be able to tell them apart?
What does it mean to say that they contain the same mind despite being composed of different matter?
It seems like you’ve assigned some definitions to a set of terms, become invested in a position based on those definitions, and now frame any sort of dispute in which those terms come up as a conflict over that position. You’re using the same words as everyone else here, but you’re discussing an entirely different subject, and a confused one at that.
Yes, I am making the point that these discussions include a presumption of something beyond Science as we know it. The only way to discuss life, mind, Will, and the like is to look at the Universe from outside, but the Universe is everything.
If we accept the premise of something beyond the Universe, sentience exists here and must extend there. Please continue the train of thought yourself. You may reject the logical inference anytime your beliefs are troubled but understand your rejection does not invalidate the conclusion.
Thanks for the feedback. I will be stopping this now.
My answer to that assignment is that I have no idea how that would work, or how I could figure out how it would. Did I guess the password? If not, then is it swordfish? Just give me a gold star!
I’ve been going through the sequences, and this is probably the post I disagree with most.
More importantly, rejecting a concept doesn’t solve the problem the concept is used for. The question to ask isn’t what the precise definition of free will is, or whether the concept is coherent. Ask instead “What problems am I trying to solve with this concept?”
Because we do use the concept to solve problems. People take actions, and those actions have effects on us. When do I retaliate against a harmful action, and when do I reward a beneficial action? How do I update my evaluation of the actor, in terms of the likelihood that they will repeat this kind of beneficial or harmful behavior?
It’s nonsense to say that free will doesn’t exist—because the problem the concept is used to solve does exist, and if you say free will doesn’t exist, you’ll still end up creating a concept just like it to solve this problem. And for the most part, people are already effectively using the concept to reward and punish appropriately. They are effectively solving a problem. By and large, they use the concept of free will to both accurately predict future behavior (epistemic rationality) and effectively take action (instrumental rationality). Isn’t that what rationality is all about?
Now, neither side has effectively made that concept coherent with their knowledge of physics: one side thinks the concept is nonsense, and the other side thinks they’re not bound by the laws of physics. Both sides are making a mistake. The concept isn’t nonsense, because it solves real problems, and we are bound by the laws of physics.
See compatibilism for details. Problem solved. A Rationalist solves problems.
So, to know if an answer is complete, you go by how certain cognitive processes make you feel? Seriously? Feelings lie. All the time.
I am curious why your posts tend to treat questions like this (“Does free will exist?”) as being substantially different from questions like “Does some god exist?”
My attempt at tackling this question: Why do people debate free will?
The topic itself is of intense interest to humans, because we’d like to believe we have it, or that it exists. This is because we’d like to believe we have control over our own actions and our thoughts, since that would give us the feeling that because of said control we can shape our surroundings in search of our own happiness, or that happiness is achievable. But the crux of the problem is that we can’t just believe in free will, because we have no idea, no proof or theory of how it exists. Thus we can’t fully and wholly believe free will exists. But we want to, so we try to justify its existence to ourselves by focusing our confusion through the question “Is there free will?” and trying to answer it, since our past experiences have taught us that by shaping our confusion into a question, the confusion can usually be resolved with an answer. And we know that if we can arrive at a logical conclusion, we will believe in the answer.
I’m not sure if I even approached the right question, but I feel like I’m done.
I actually use this fact to enable lucid dreaming. When I’m dreaming, I ask myself, “am I dreaming?” And then I answer yes, without any further consideration, as I’ve realized that the answer is always yes. Because when I’m awake, I don’t ask that question, because there’s never any doubt to begin with. So when I’m dreaming and I find myself unsure of whether or not I’m dreaming, I therefore know that I’m dreaming, simply because the doubt and confusion exists. It’s a method that’s a lot simpler (and more accurate) than trying to analyze the contents of the dream to see if it seems real.
I used to use a similar technique, but found the absence of pain was more reliable; you can start wondering if something is just a dream, but you can’t start feeling pinches in a dream.
I can feel pain in dreams. I’m not sure if I can self-inflict pain in dreams (I’ve never tried), but I’ve definitely felt pain in dreams.
Yeah, it doesn’t work for everyone, unfortunately. IIRC a (possibly slim) majority of people can’t feel pain in dreams—it’s probably connected to the mechanism that prevents you from remembering pain: you know it was there, but you don’t really experience it like other memories. That’s why pinching yourself is the traditional method of proving something isn’t a dream. Some people can’t read text or tell the time in dreams; it changes between viewings, is gibberish, blank, etc. I can. AFAIK there is no method of lucid dreaming that works for everyone; you have to experiment.
I can also feel pain in dreams here.
EDIT: made small edits.
In my opinion, the question is brilliant and its importance is misunderstood, though EY somewhat dances around it.
Whether or not the tree makes a noise is irrelevant once no one can hear it, and thus whether or not the tree is heard is a pre-condition to knowledge that it has fallen/made noise. The point then is that (i) the lack of truth of a statement and (ii) the truth of a statement that cannot be understood are effectively the same thing.
In other words, what is pointless is trying to pin down truths that cannot be conclusively proven within the bounds of human comprehension (e.g., is there free-will, what is the meaning of life), because practically speaking you’re in the same place you would be if there was no answer—just arguing amongst those who choose to consider the question in the first place.
The best evidence that confirmation bias is real and ever-present is a website of similarly thinking people that values comments based on those very users’ reactions. Perhaps unsurprisingly, those who conform to the conventional thought are rewarded with points. So I guess that while the point system doesn’t actually work as a substantive matter, at least we are afforded a constant reminder that confirmation bias is a problem even among those who purport to take it into account.
Of course, my poking fun will only work so long as I don’t get so many negative points that I can no longer question the conventional thought (gasp!). What is my limit? I’ll make sure to conform just enough to stay on here. :) The worst part is I’m not even trying to troll; I’m trying to listen and question at the same time, which is how I thought I was supposed to learn!
This simply isn’t true. There are lots of ways I can know a tree has fallen, even if nobody has heard the tree fall.
What you’re saying is obviously true, but it goes beyond the information available. The question, limited to the facts given, is representative of a larger point, which is the one I’m trying to explain as a general observation, and is not limited to whether in fact that tree fell and made a noise.
By the way, I never thanked you for our previous back-and-forth -- it was actually quite helpful, and your last comment in our discussion has kept me thinking for a couple of weeks now; perhaps in a couple more I will respond!
‘Free will’ is the halting point in the recursion of mental self-modeling.
Our minds model minds, and may model those minds’ models of minds, but cannot model an unlimited sequence of models of minds. At some point it must end on a model that does not attempt to model itself; a model that just acts without explanation. No matter how many resources we commit to ever-deeper models of models, we always end with a black box. So our intuition assumes the black box to be a fundamental feature of our minds, and not merely our failure to model them perfectly.
This explains why we rarely assume animals to share the same feature of free will, as we do not generally treat their minds as containing deep models of others’ minds. And, if we are particularly egocentric, we may not consider other human beings to share the same feature of free will, as we likewise assume their cognition to be fully comprehensible within our own.
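A toy rendering of that halting point, with the depth limit and labels chosen for illustration:

```python
# Toy version of the claim: we model minds modelling minds only to some finite
# depth; the deepest level cannot model itself, so it gets labelled a black
# box, and the black box gets read as 'free will'. Depth and labels are mine.

def model_mind(depth_budget):
    if depth_budget == 0:
        return "black box ('free will')"         # no resources left to model further
    return {"models": model_mind(depth_budget - 1)}

print(model_mind(3))
# {'models': {'models': {'models': "black box ('free will')"}}}
```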
...d-do I get the prize?
You have, in the local currency.
So, you are saying that free will is an illusion due to our limited predictive power?
...hmm.
If we perfectly understood the decision-making process and all its inputs, there’d be no black box left to label ‘free will.’ If instead we could perfectly predict the outcomes (but not the internals) of a person’s cognitive algorithms… so we know, but don’t know how we know… I’m not sure. That would seem to invite mysterious reasoning to explain how we know, for which ‘free will’ seems unfitting as a mysterious answer.
That scenario probably depends on how it feels to perform the inerrant prediction of cognitive outcomes, and especially how it feels to turn that inerrant predictor on the self.
You know, that fits. We often fail to ascribe free will to others, talking about how “that’s not like him” and making the Fundamental Attribution Error (“he’s a murderer—he’s evil!”)
This means we have to ascribe free will to any sufficiently intelligent agent that knows about our existence, right? Because they’ll be modelling us modeling them modelling us?
Um... the halting problem plus Gödel’s incompleteness theorem, a.k.a. you cannot predict yourself completely? I think I’m missing a piece or two, and I probably am, thanks to having “incompleteness theorem and halting problem” as a cached thought.
At any rate, I made a comparison between free will and arbitrary code while thinking about this.
oh horrors.
You think the algorithms that power the human mind understand either the halting problem or the incompleteness theorem enough to develop intuitions about free will?
No, I think the incompleteness theorem means there are going to be gaps in anyone’s self-awareness... and if a decision manages to spring from one of these, it may feel like an arbitrary choice.
That this is able to be seen as “free will” persists because people DON’T generally understand the halting problem all that well, and so they do not feel like they could possibly be deterministic.
Those who do understand the halting problem...frequently also know a thing or two about quantum mechanics, just enough that they can salvage their belief in free will.
...
I notice that I am still horribly confused (as manifested by a hundred “missing piece” explanations popping up)... but I also notice I now have a headache.
Free will is basically asking about the cause of our actions and thoughts. The cause of our neurons firing. The cause of how the atoms and quarks in our brains move around.
To know that X causes the atoms in our brain to move a certain way, we’d have to know that every time X happens, the atoms in our brain would move in that specific way. The problem is that we would have to see into the future. We’d have to see what results from X in every future instance of X. We don’t have that information. All we have are our past and current experiences, that we use to induce what will happen in the future. (This is nothing new, just the induction fallacy.)
So, it seems that we can’t determine causes. Maybe somehow if our understanding of physics allows us to deconstruct time and see the future, we might be able to determine causes, but right now we can’t do that, so it seems that we can’t determine causes.
If we can’t determine causes, we can’t know whether or not we have free will.
Let’s consider two possibilities:
1) our “consciousness” causes the atoms in our brain to move in certain ways
2) “physics” causes the atoms to move the way they do
Regardless of whether (1) or (2) is correct, it wouldn’t lead to any different experiences for us. We’d still act and think the way we do, and we’d still psychologically feel like we’re in control of our thoughts and actions. I think this is what Eliezer is saying; that the question of free will is pointless because regardless of what the answer is, it won’t lead us to different experiences.
My objection—just because we don’t know the true cause doesn’t mean we can’t. Knowing the true cause would (at the very least) be interesting. For that reason, I don’t think the question of free will is “meaningless”. I know it doesn’t seem like we could know the true cause, but it’s tough to predict what we might know, say, a million years from now.
Objection to myself: I’m not sure exactly what I mean by consciousness. If “consciousness” doesn’t “mean something”, then the question is basically a matter of physics and what laws of physics govern the movement of the atoms in our brains, which isn’t as interesting, at least to me.
Unfortunately, I’m not sure what it would mean for “consciousness” to be the cause of the atoms in our brains moving. As far as our experiences and ability to measure things goes, it probably doesn’t “mean” anything. I guess that that is the point Eliezer is making.
I’m still notably confused, but I’m definitely getting closer. I would very much appreciate it if anyone could help me understand why it doesn’t mean anything for “consciousness” to cause the atoms in our brains to move.
If we’re pretending that free will is both silly and surprising, then why aren’t we more surprised by stronger biases towards more accurate notions like causality?
If there was no implicit provision like this, there’s no sense to asking any question like “why would brains tend to believe X and not believe not X?” To entertain the question, first we entertain a belief that our brains were “just naïve enough” to allow surprise at finding any sort of cognitive bias. Free will indicates bias—this is the only sense I can interpret from the question you asked.
Obviously, it is irrational to believe strongly either way if no evidence is commonly admitted. Various thought experiments could be made to suggest free will is not among those beliefs we hold by evaluation of log-likelihoods over hypotheses given evidence. And so, if “free will” is significantly favored while also baseless, then a cognitive bias remains one of the better possible explanations for the provisional surprise we claim about observing belief in free will.
At least it is so in my general, grossly naïve understanding. And in lieu of a stack trace, I’ll say this: cognitive biases seem like heuristic simplifications that cause systematic errors in inference. They favor improper scoring when betting on expectations in certain contexts. Assuming any reason exists, the motivation is most likely the same as with over-fitting in any other model: it’s a sampling bias. And since engineering mistakes into our brains sounds generally harmful, each type of over-fitting must pay off tremendously in some very narrow scope of high-risk, high-reward opportunities.
The need to reason causally isn’t any more apparent than free will, but it just sounds less mysterious because it fits the language of mathematics. Causality and free will are related, but learning causality seems such a necessary objective to a brain that I doubt we’d get so many other biases without getting causality ensured first. I doubt we’re built without an opinion on either issue.
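To make the phrase “evaluation of log-likelihoods over hypotheses given evidence” above concrete, here is a minimal sketch of that kind of computation. The coin hypotheses, the priors, and the observed flips are invented purely for illustration and have nothing to do with free will in particular.

```python
import math

# Toy Bayesian comparison of two hypotheses about a coin, purely illustrative.
# H1: the coin is fair (p = 0.5); H2: the coin is biased towards heads (p = 0.8).
hypotheses = {"fair": 0.5, "biased": 0.8}
prior = {"fair": 0.5, "biased": 0.5}        # no initial preference
evidence = [1, 1, 0, 1, 1, 1, 0, 1]          # observed flips: 1 = heads, 0 = tails

def log_likelihood(p_heads, flips):
    """Log-probability of the observed flips under a given heads probability."""
    return sum(math.log(p_heads if f else 1 - p_heads) for f in flips)

# Unnormalised log-posterior = log-prior + log-likelihood, then normalise.
log_post = {h: math.log(prior[h]) + log_likelihood(p, evidence)
            for h, p in hypotheses.items()}
norm = math.log(sum(math.exp(v) for v in log_post.values()))
posterior = {h: math.exp(v - norm) for h, v in log_post.items()}

print(posterior)  # the belief tracks the evidence, not a built-in preference
```

The contrast being drawn, as I read the comment, is that belief in free will does not look like the output of a computation along these lines, because there is no commonly admitted evidence to feed into it.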
“Free will” is a black box containing our decision making algorithm.
What kind of mind would invent “free will”? The same mind that would neatly wrap up any other open-ended question into a single label, be it “élan vital” or “phlogiston”. Our minds are fantastic at dreaming up explanations for things, and if those explanations are not easily empirically testable at the time, they tend to stick. Without falsifying evidence, our pet theories tend to remain, and confirmation bias slowly hardens them into what feel like brute facts.
It’s appealing because it ties up (or at least hides) loose ends. If we play taboo on “free will”, we might get something like “the concept that people can narrow a number of possible futures into one future that is optimal”. With this definition, free will would indeed exist. If, however, “free will” was postulated in such a way as to include some fantastical element, or another black box, Occam’s razor may strike it down. Alternatively, it may be superficially appealing enough to stick, so long as we don’t think about it too thoroughly. For example, “the idea that humans are in control of their actions” feels like an explanation, but contains “control” as a nested black box.

But what does this process actually feel like when we make such a mistake? Well, it’s based on implicit assumptions, so nothing feels amiss. You don’t realize that you are making an implicit assumption. All the loose ends look like they are tucked away, at least at a glance. But if you take a closer look, say by repeatedly asking “why?”, then you start to feel less confident. This is a sign of trouble, but you should make sure you aren’t just asking “does 1 plus 1 really equal 2?” in a pretentious tone of voice. If repeated self-inquiry seems to be creating a rabbit hole of nested black boxes, then you should go back to the highest-level box and try a different form of inquiry. Ask yourself if there is anything about the nested black boxes that feels wrong. Use all your tools as a rationalist to inquire into this, and hopefully find a path of inquiry besides infinite regression. For a thorough analysis, ask yourself whether the nested boxes make observable predictions, and how those predictions might differ from reality.
With our example of playing taboo on “free will” to get “the idea that humans are in control of their actions”, we might intuit that “control” is just an inherently complex concept. Although positing agency during an explanation is usually a good way of siccing Occam’s razor on it, perhaps this is an exception; we are discussing agency itself, after all. But what does it mean to have agency? What are the observable differences in the world? When we hold these sorts of questions in our minds, and again try to play taboo, we are more likely to get something like “the concept that people can narrow a number of possible futures into one future that is optimal”.

That’s a much different answer, because a computer program could also down-select from a number of different options, given some criteria. This answer also doesn’t leave loose ends, and doesn’t leave that nagging feeling of doubt that comes from having left something unexplained. It turns out that all our sense of confusion was contained within the mysticism implied by using the phrase “free will”. It may be easy to forget about that doubt when only taking a broad view, but as soon as you zoom in on the problem it will become detectable. We live in a messy world, and so frequently have to say “good enough” and leave unexplained doubts due to time constraints, but when we decide that something is important and pursue the little doubting feeling to its conclusion, it can be incredibly satisfying. You’ve just answered one of the mysteries of the universe, after all.

So that’s what it feels like to make and then correct a mistake, but why and how did we make the mistake in the first place? Well, our minds are naturally wired around concepts of agency. This is observable in others, and in our own experience; it really does feel like the dice are out to get us, or that “the system” must be consciously malicious rather than merely incompetent. It is even more natural to endow ourselves with that same vague agency we give inanimate objects and bureaucratic systems. It’s only been in the past couple hundred years that humanity has been able to group everything under the same laws of physics. Before that, the stars obeyed their own special rules, and living things were unexplainable mysteries running on “élan vital” instead of something more akin to a combustion reaction. Unless we look closely enough at a belief to notice that it requires the world to operate under a different set of physical principles, we will default to what is most natural to believe.
As for why the concept of free will should exist in the first place, it is because it is the most natural explanation. It is a fault in our minds that the most natural is not also the simplest, but it is also a useful feature. This form of vague, associative reasoning lets us jump to reasonably accurate conclusions quickly. The difference between the two breakdowns of “free will” I gave is that one only uses known, well-understood phenomena, and the other revolves around making the concept of agency unexplainable. One might also entangle the question with the concept of self and our individual sense of identity. By rolling a bunch of concepts up into one, there are fewer easily recognizable loose ends. The problem is that tools like associative reasoning and vague definitions aren’t enough to actually arrive at a satisfying answer. They tie up enough loose ends that we declare we’ve solved it and move on, ignoring the incompleteness.

Note that this in itself isn’t a completely exhaustive explanation of why we naturally want to believe in free will. Where does this concept come from? How do we form it in the first place? To fully answer these, we’d have to also examine the concept of our personal sense of identity, since that is the thing that gets conflated with the “free will” concept to form the more vague and fluffy version. If someone thinks computers don’t or can’t have free will but humans can, this is likely what they mean by “free will”. Our sense of identity is a large issue, and well enough outside the scope of the question that I think it isn’t necessary for this explanation. I’ll leave that particular novel-length post for someone else.
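To make the tabooed definition above concrete, “the concept that people can narrow a number of possible futures into one future that is optimal”, here is a minimal sketch of that kind of down-selection. The candidate futures, the preference scores, and the function name are all invented for illustration; nothing here is a claim about how brains actually implement the process.

```python
from typing import Callable

def choose(possible_futures: list[str], score: Callable[[str], float]) -> str:
    """Down-select from candidate futures to the single one that scores best."""
    return max(possible_futures, key=score)

# Invented options and preferences, standing in for whatever criteria a person uses.
futures = ["take the job", "stay put", "go back to school"]
preference = {"take the job": 0.7, "stay put": 0.4, "go back to school": 0.6}

decision = choose(futures, lambda f: preference[f])
print(decision)  # "take the job": selected by criteria, with no extra metaphysics
```

The point is only that “selecting the best option given some criteria” is an ordinary, fully mechanical operation, which is why that version of the definition leaves no mysterious loose ends.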
I wrote the above before reading any of the comments, but there are a couple other ideas which people touched on but I did not. I’m bringing them together here, mostly for my own future reference:
Humans have the ability to model the outside world, including other people, in our own minds, but not our own minds themselves. Because of this, it seems like our choices aren’t subject to causality. Credit, and more detail, here.
Another comment goes into more detail of why this is. In order to fully model itself, a mind would need more power than it has. Therefore, minds cannot predict their own actions with high fidelity. For minds that don’t intuitively understand concepts like recursion, this implies that their own future actions cannot be predicted, and that therefore free will exists.
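Here is a toy caricature (mine, not the commenter’s) of the regress being described: a predictor that can only predict itself by first simulating itself never bottoms out, so it cannot return an answer within its own resources.

```python
def predict_my_next_action(depth: int = 0) -> str:
    """Toy self-predictor: to predict its own output it must first simulate itself."""
    # A faithful self-model has to include the act of self-modelling,
    # so the simulation recurses instead of ever producing an answer.
    return predict_my_next_action(depth + 1)

try:
    predict_my_next_action()
except RecursionError:
    # The mind-sized analogue: the prediction exhausts the predictor's resources.
    print("self-simulation never bottomed out")
```

To a mind that experiences this regress only as “I can’t tell what I’ll do next”, its own future actions look unpredictable, which is one candidate source of the free-will intuition.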
If we have separate neural hardware for processing human actions and for inanimate events then this might lead to the idea of free will, and then also several other odd notions.
Noise / sound exist independently of observation, at least so long as you subscribe to the idea that there exists an objective reality outside of your own mind. They are pressure waves transmitted through some medium.
The tree makes a sound, which no one hears.
The answer here seems to be the same as in the sound example, and in most philosophical debates in general:
1) Different categorization patterns, or, simply put, different meanings of a word. In this situation, even two words: people can disagree on what “will” is (in the context of “free”) and on what “free” is (in the context of “will”; let us assume a Frege-Heimian world where, if you know the two nodes, you always know their combination, so we can ignore the “context” addenda).
2) Politicization of the question. In a world where “free is good”, having free will is good. In a world where “determinism is good” and “free will is incompatible with determinism”, having free will is bad. And people want to be good. Also, we may want to agree with someone (say, Gandhi) and disagree with someone else (say, Fomenko) no matter what they say. Thus affective death spirals and one-sided politics and whatnot.
I think we care about whether or not we have free will because we associate it with accountability, both our own and others’.
If someone picks me up and throws me at you, you should not blame me for slamming into you; this is not my fault, and I had no say in the matter. If someone points a gun at me and tells me to hit you, you probably won’t blame me for complying. But if you had to rank my accountability in these two cases, it’s obvious that I’m more accountable in the latter, because I did have a choice: I could have refused to hit you and been shot. That is a very unfavorable choice, and you would not expect me to pick it, so on the global scale of accountability it doesn’t really count as a choice; but if we zoom in on just these two cases, it’s more choice than the no choice I had in the former case.
Moving on: if I steal food because I’m hungry, should I be held accountable?
This question is controversial in ethical philosophy, and I won’t form an opinion, because the point of this exercise is not to solve such controversies but to understand the cognition behind them. If I don’t eat for long enough, I will die. But unlike the case with the gun, I will not die immediately if I don’t steal this bread right now, so I don’t face immediate death, only hunger. I have more choice here: not enough to make me accountable by consensus, but enough to push the case from unaccountable by consensus to controversial.
So, accountability is directly linked to will—the more freely I can use my will, the more accountable I should be for my actions. We want to know if people have free will because we want to know if (or—to what degree) they should be held accountable.
Why do we care about accountability? Because we want to punish and/or reward, but we don’t want to do that based on luck. If you punish me for stealing, but I stole because I was hungry, then you punish me for being unlucky enough to go hungry, which is morally wrong. But if it “really” was my will to steal, then you are punishing me for stealing, which is morally right.
We care about free will because we don’t want to punish/reward people because of their circumstances—only because of their essences. To discuss the actual difference between circumstances and essences is to answer the question of free will—which is outside the scope of this exercise.
This question never sounded like a meaningful one to me. By the time I first heard it, I was familiar with the understanding of sound as vibrations in the air, so the obvious answer was “yes.”
As Sam Harris points out, the illusion of free will is itself an illusion. It doesn’t actually feel like you have free will if you look closely enough. So then why are we mistaken about things when we don’t examine them closely enough? Seems like a too-open-ended question.
Is the illusion of the illusion of free will also an illusion? Is it a recursive illusion?
That seems unlikely. There is already a certain difficulty in showing that the illusion of free will is an illusion: “It seems like you have free will, but actually, it doesn’t seem.” The seeming is self-evident, so what does it mean to say that something doesn’t actually seem so if it feels like it seems? As far as I understand it, the claim is not that it doesn’t really seem so while you are mistaken into thinking it seems so, with mindfulness meditation clearing up that mistake so you stop thinking it seems that you have free will. Instead, you observe the seeming itself just disappear. It stops seeming that you have free will.
So now we come to your suggestion: “It seems (level 2) like the seeming (level 1) disappears, but actually, it doesn’t seem (level 2) like the seeming (level 1) disappears.” But once again, the seeming (level 2) is self-evident. So you would need to come up with some extraordinary circumstances associated with even more mental clarity to show that this seeming (level 2) also disappears. That is unlikely, because the concept of free will is already incoherent, so more mental clarity shouldn’t point you towards it.
Three things bother me here, and they’re all about which questions are being asked.
The “tree falling in a forest” question isn’t, as far as I’ve encountered it outside of this blog, about the definition of sound. Rather, it’s about whether or not reality behaves the same when you do not observe it, an issue that you casually dismissed without any proof, evidence, or even argument. There are ways to settle this dispute partially, though they are not entirely empirical, due to the nature of the conundrum.
Ignoring the question of free will, ill-defined as it may be, is merely “pretending to be wise”. You’re basically saying you now know not to ask these questions, without explaining why (at least here). If there are any convincing arguments that settle a well-defined notion of free will, I welcome them.
Last but not least, I’m bothered by the choice of a question to settle all arguments: just write down the mental processes that lead to the argument? Why stop there? Why not map the specific clusters of neurons and synapses activating the argument and reinforced by it? Having written down this stack of processes, can you perform neurosurgery that will stop this pattern of thinking (but not unrelated ones)? In Science, there may be such a thing as “being done”, but this isn’t it. Not by a long shot.