Mysterious Answers to Mysterious Questions
Imagine looking at your hand, and knowing nothing of cells, nothing of biochemistry, nothing of DNA. You’ve learned some anatomy from dissection, so you know your hand contains muscles; but you don’t know why muscles move instead of lying there like clay. Your hand is just . . . stuff . . . and for some reason it moves under your direction. Is this not magic?
It seemed to me then, and it still seems to me, most probable that the animal body does not act as a thermodynamic engine . . . The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms[.]1
[C]onsciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears, therefore, that animated creatures have the power of immediately applying, to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce desired mechanical effects.2
Modern biologists are coming once more to a firm acceptance of something beyond mere gravitational, chemical, and physical forces; and that unknown thing is a vital principle.3
—Lord Kelvin
This was the theory of vitalism: that the mysterious difference between living matter and non-living matter was explained by an élan vital or vis vitalis. Élan vital infused living matter and caused it to move as consciously directed. Élan vital participated in chemical transformations which no mere non-living particles could undergo—Wöhler’s later synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that mere chemistry could duplicate a product of biology.
Calling “élan vital” an explanation, even a fake explanation like phlogiston, is probably giving it too much credit. It functioned primarily as a curiosity-stopper. You said “Why?” and the answer was “Élan vital!”
When you say “Élan vital!” it feels like you know why your hand moves. You have a little causal diagram in your head that says [“Élan vital!”] → [hand moves].
But actually you know nothing you didn’t know before. You don’t know, say, whether your hand will generate heat or absorb heat, unless you have observed the fact already; if not, you won’t be able to predict it in advance. Your curiosity feels sated, but it hasn’t been fed. Since you can say “Why? Élan vital!” to any possible observation, it is equally good at explaining all outcomes, a disguised hypothesis of maximum entropy, et cetera.
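A minimal sketch of the maximum-entropy point, with made-up likelihood numbers (this illustration is not part of the original essay): a hypothesis that assigns the same likelihood to every possible observation cannot move a Bayesian posterior at all.

```python
def posterior(prior, lik_h, lik_alt):
    """Posterior probability of hypothesis H after one observation."""
    evidence = prior * lik_h + (1 - prior) * lik_alt
    return prior * lik_h / evidence

prior = 0.5
for observation in ("hand generates heat", "hand absorbs heat"):
    # "Elan vital!" assigns the same likelihood to both possible
    # observations, so it cannot favor either of them in advance.
    p_obs_given_vital = 0.5
    p_obs_given_rival = 0.5
    print(observation, posterior(prior, p_obs_given_vital, p_obs_given_rival))
# Both posteriors equal the prior (0.5): whatever you observe, the
# "explanation" has predicted nothing.
```

Either observation leaves the probability exactly where it started; that is what it means for curiosity to feel sated without having been fed.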
But the greater lesson lies in the vitalists’ reverence for the élan vital, their eagerness to pronounce it a mystery beyond all science. Meeting the great dragon Unknown, the vitalists did not draw their swords to do battle, but bowed their necks in submission. They took pride in their ignorance, made biology into a sacred mystery, and thereby became loath to relinquish their ignorance when evidence came knocking.
The Secret of Life was infinitely beyond the reach of science! Not just a little beyond, mind you, but infinitely beyond! Lord Kelvin sure did get a tremendous emotional kick out of not knowing something.
But ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious is to worship your own ignorance.
Vitalism shared with phlogiston the error of encapsulating the mystery as a substance. Fire was mysterious, and the phlogiston theory encapsulated the mystery in a mysterious substance called “phlogiston.” Life was a sacred mystery, and vitalism encapsulated the sacred mystery in a mysterious substance called “élan vital.” Neither answer helped concentrate the model’s probability density—helped make some outcomes easier to explain than others. The “explanation” just wrapped up the question as a small, hard, opaque black ball.
In a comedy written by Molière, a physician explains the power of a soporific by saying that it contains a “dormitive potency.” Same principle. It is a failure of human psychology that, faced with a mysterious phenomenon, we more readily postulate mysterious inherent substances than complex underlying processes.
But the deeper failure is supposing that an answer can be mysterious. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. In doing so, they mixed up the map with the territory. All confusion and bewilderment exist in the mind, not in encapsulated substances.
This is the ultimate and fully general explanation for why, again and again in humanity’s history, people are shocked to discover that an incredibly mysterious question has a non-mysterious answer. Mystery is a property of questions, not answers.
Therefore I call theories such as vitalism mysterious answers to mysterious questions.
These are the signs of mysterious answers to mysterious questions:
First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to cause this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity.
Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.
1 Lord Kelvin, “On the Dissipation of Energy: Geology and General Physics,” in Popular Lectures and Addresses, vol. ii (London: Macmillan, 1894).
2 Lord Kelvin, “On the Mechanical action of Heat or Light: On the Power of Animated Creatures over Matter: On the Sources available to Man for the production of Mechanical Effect,” Proceedings of the Royal Society of Edinburgh 3, no. 1 (1852): 108–113.
3 Silvanus Phillips Thompson, The Life of Lord Kelvin (American Mathematical Society, 2005).
Nitpick:
“The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on.”
But Kelvin (in your quote) qualified it with ”… hitherto entered on”. Whether or not “infinitely” is fitting, doesn’t this imply that Kelvin left open the possibility that future scientific inquiry could succeed?
(a) not when you say “infinitely”
(b) “Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms”
Kelvin was smart enough to hedge his bets?
Could it not also have been partly due to earlier scientists underestimating the degree to which qualitative phenomena derive from quantitative phenomena? Their error, then, was in tending to assume this quality was immune to study, rather than in assuming the quality itself.
Since you can say “Why? Élan vital!” to any possible observation, it is equally good at explaining all outcomes, a disguised hypothesis of maximum entropy, et cetera.
But you say earlier ‘Elan vital’ was greatly weakened by a piece of evidence. In that light, its hypothesis could be stated as “the mechanisms of living processes are of a different kind than the mechanisms of non-living processes, so you will not be able to study them with chemistry”. This is false, but I don’t think it’s entirely worthless as a hypothesis, since biochemistry is noticeably different from non-living chemistry.
I think ‘elan vital’ makes some sense, even in a modern light. Most of the reactions in our body would not occur without enzymes, and enzymes are a characteristic feature of life. So perhaps we can say that ‘elan vital’ is enzymes! There is at least one experiment I can think of that could have been interpreted to show this too: I believe it involved fermentation being carried out with yeast-water (no living yeast, but clearly having their enzymes).
I like your list of signs of a curiosity stopper. I don’t necessarily think that “elan vital” meets those requirements (as Roy points out), but perhaps it did for many people or at some times.
I like the list because my brain feels a little more limber and a little more powerful, having pondered it. The list is a curiosity ENHANCER, and an anticipation SHARPENER.
-- James
But you say earlier ‘Elan vital’ was greatly weakened by a piece of evidence
Heh. A fair point! Every mysterianism, though it may fail to predict details and quantities, is ultimately vulnerable to the one experience in all the world that it does prohibit—the discovery of a non-mysterious explanation.
Is that true? Does the discovery of a non-mysterious explanation serve as negative evidence of a mysterious one? If by “mysterious” we mean “not strongly predicting any outcomes” then when a theory that is strongly predictive is discovered, the available evidence will shift probability from the mysterious explanation to the predictive one. And I do think that’s what you mean by “mysterious”, so that’s a good observation.
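A minimal numeric sketch of that shift, with invented priors and likelihoods: a “mysterious” hypothesis M spreads its likelihood evenly over N possible outcomes, while a predictive hypothesis P concentrates it on one; observing P’s predicted outcome moves the posterior toward P.

```python
# Invented numbers, for illustration only.
N = 10                   # possible experimental outcomes
prior_m = prior_p = 0.5  # start indifferent between M and P
lik_m = 1.0 / N          # M "explains" every outcome equally well
lik_p = 0.9              # P strongly predicted the outcome actually seen

posterior_p = prior_p * lik_p / (prior_p * lik_p + prior_m * lik_m)
print(posterior_p)       # 0.9: probability mass flows to the predictor
```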
However, I’m not sure that this is the case with Elan vital. I’m not sure that Elan vital was weakened by virtue of being nonpredictive and thereby losing some of its credibility to a more predictive explanation. It may also be the case that Elan vital predicted something which was contradicted by the synthesis of urea. If this was the case, then it is inaccurate to say that Elan vital served only as a curiosity-stopper or was a fake explanation. It would, in fact, be a perfectly fine explanation.
So what was the theory of Elan vital, and did it prohibit urea synthesis? The theory was that the matter of living bodies has a property that non-living bodies can’t have. This is clearly a predictive theory. It prohibits any living matter from being identical to non-living matter because if two things are identical then there aren’t any properties that one has that the other doesn’t.
The synthesis of urea showed that at least one type of biological matter could be produced using purely mechanical means. Is this evidence that the synthetic urea and the natural urea are identical? No. One could postulate that there was a non-material difference between them. However, this would make that difference impossible to observe, and thus make the theory nonpredictive.
According to your post:
“Wöhler’s later synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that mere chemistry could duplicate a product of biology.”
The fact that this synthesis served as negative evidence implies that the proponents of Elan vital believed that the difference between biological matter and non-biological matter would be observable. Therefore, the theory of Elan vital, as portrayed in your post, was predictive. Admittedly, not very predictive, but also not merely a curiosity stopper. Rather, it was a sufficiently vague theory to reflect the lack of evidence that they had at the time.
On the other hand, it’s a self-contradictory theory. If they thought the difference would be observable, they must have also thought that it was physical. Yet they thought that the processes of life could not be explained with physical interactions. Yet they clearly thought that those processes were caused by the physical structure of biological matter. How can something be caused entirely by physical matter but not be explainable by a physical explanation?
In conclusion, there are valid criticisms of Elan vital, as portrayed by your post, but not the ones you leveled against it.
These are the signs of mysterious answers to mysterious questions: Another good sign is that the mysterious answer is always in retreat. Suddenly, people explain some phenomena previously thought to be explainable only by “elan vital” or “god” or “the influence of platonic Ideals”. And the mysterious answer retreats to a smaller realm. And that realm just keeps on shrinking...
This post reads rather like a pastiche of Dan Dennett (on consciousness and free will).
And to continue the thread of Roy’s comment as picked up by Eliezer, it might have been a fairly reasonable conjecture at the time (or at some earlier time). We have to be wary about hindsight bias. Imagine a time before biochemistry and before evolution theory. The only physicalist “explanations” you’ve ever heard of or thought of for why animals exist and how they function are obvious non-starters...
You think to yourself, “the folks who are tempted by such explanations just don’t realize how far away they are from really explaining this stuff; they are deluded.” And invoking an elan vital, while clearly not providing a complete explanation, at least creates a placeholder. Perhaps it might be possible to discover different versions of the elan vital; perhaps we could discover how this force interacts with other non-material substances such as ancestor spirits, consciousness, magic, demons, angels etc. Perhaps there could be a whole science of the psychic and the occult, or maybe a new branch of theological inquiry that would illuminate these issues. Maybe those faraway wise men that we’ve heard about know something about these matters that we don’t know. Or maybe the human mind is simply not equipped to understand these inner workings of the world, and we have to pray instead for illumination. In the afterlife, perhaps, it will all be clear. Either way, that guy who thinks he will discover the mysteries of the soul by dissecting the pineal gland seems curiously obtuse in not appreciating the magnitude of the mystery.
Now, in retrospect we know what worked and what didn’t. But the mystics, it seems, could have turned out to have been right, and it is not obvious that they were irrational to favor the mystic hypothesis given the evidence available to them at the time.
Perhaps what we should be looking for is not structural problems intrinsic to certain kinds of questions and answers, but rather attitude problems that occur, for example, when we ask questions without really caring about finding the answer, or when we use mysterious answers to lullaby our curiosity prematurely.
We don’t need to imagine. We are in exactly this position with respect to consciousness.
People with the benefit of hindsight failing to realize how reasonable vitalism sounded at the time is precisely why they go ahead and propose similar explanations for consciousness, which seems far more mysterious to them than biology, hence legitimately in need of a mysterious explanation. Vitalists were merely stupid, to make such a big deal out of such an ordinary-seeming phenomenon as biology—consciousness is different.
This is precisely one of the ways in which I went astray when I was still a diligent practitioner of mere Traditional Rationality, rather than Bayescraft. The reason to consider how reasonable mistakes seemed without benefit of hindsight, is not to excuse them, because this is to fail to learn from them. The reason to consider how reasonable it seemed is to realize that not everything that sounds reasonable is a good idea; you’ve got to be strict about things like yielding increases in predictive power.
Do you have something on the difference between Traditional Rationality and Bayescraft?
I am finally taking Prob. & Stats next semester (and have not yet looked at the book to see how Bayes figures into it. I am going to be pissed if it doesn’t enter into the class at this point), so I figure that I will get my formal introduction to Bayes then. However, I do know the basic formula: P(A|B) = P(B|A) P(A) / P(B).
And, I can regurgitate Wikipedia’s entries on Bayes, yet I don’t seem to have any real context into which I can place the difference between Bayes and traditional Probability distributions… Can you help, please?
Never let the official curriculum slow you down! But still approach things systematically, find yourself a textbook.
I am currently taking Stats (AP class in the USA, IB level elsewhere), and hope that I can help.
A traditional probability test will take four frequencies (male smokers, female smokers, male nonsmokers, and female nonsmokers) and tell you if there is a correlation with a χ² test.
Bayescraft lets you use gender as a way to predict the likelihood of smoking, or use smoking to predict gender. The fundamental difference, as far as I can tell, is that Statistics takes results about samples and applies them to populations. Bayescraft takes results about priors and applies them to the future. The two use similar methodology to address fundamentally different questions.
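To see the contrast concretely, here is a short sketch with an invented smoker/gender table (assumes `scipy` is installed; all counts are made up for illustration):

```python
from scipy.stats import chi2_contingency

# Invented contingency table: rows = male/female, columns = smoker/nonsmoker.
table = [[60, 40],
         [30, 70]]

# Frequentist question: is gender correlated with smoking in this sample?
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4g}")

# Bayesian-flavored question: what do the same counts predict?
p_smoker_given_male = 60 / (60 + 40)  # P(smoker | male) = 0.6
p_male_given_smoker = 60 / (60 + 30)  # P(male | smoker) ≈ 0.67
print(p_smoker_given_male, p_male_given_smoker)
```

The χ² test answers “is there an association at all?”, while the conditional probabilities turn the same counts directly into predictions about the next case.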
Eliezer: It doesn’t seem to me that you really engaged with Nick’s point here. Also, I have pointed out to you before that there were lots of philosophers who believed that consciousness was unique and mysterious but life was not long before science rejected vitalism.
The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms… Modern biologists are coming once more to a firm acceptance of something beyond mere gravitational, chemical, and physical forces; and that unknown thing is a vital principle.
Given what we know now about the vastly complex and highly improbable processes and structures of organisms—what we have learned since Lord Kelvin about nucleic acids, proteins, evolution, embryology, and so on—and given that there are many mysteries still, such as consciousness and aging, or how to cure or prevent viruses, cancers, or heart disease, for which we still have far too few clues—this rather metaphorical and poetic view of Lord Kelvin’s is certainly a far more accurate view of the organism, for the time, than any alternative model that posited that the many details and functions of the human body, or its origins, could be most accurately modeled by simple equations like those used for Newtonian physics. To the extent vitalism deterred biologists from such idiocy, vitalism must be considered for its time a triumph. Too bad there were too few similarly good metaphors to deter people from believing in central economic planning or Marx’s “Laws of History.”
Admittedly, the “infinitely different” part is hyperbole, but “vastly different” would have turned out to be fairly accurate.
Is it better to say “The problem is too big, let’s just give up” or “The problem is too big for me, but I can start with X and find out how that works”?
It seems to me Lord Kelvin was saying the former, while Wöhler clearly believed the latter, and proved it by synthesizing urea.
Did Wöhler understand the intricacies of biology? No, of course not, but he proved they could be discovered, which is exactly what Kelvin was saying could not be done. After almost 200 years we still aren’t done, but we do know a whole lot about the intricacies of biology, and we have a rough idea of how much farther we need to go to understand all of it. Furthermore, we understand that while biology is incredibly complex, it follows the same rules that govern the “fortuitous concourse of atoms”, as Kelvin put it.
Kelvin was plainly wrong, and worse, his whole point was to discourage further research into biology. He was one of the people who said it could not be done, while Wöhler just went ahead and did it.
I don’t really see the problem here. The causal link from mind to body is not much better understood today than it was in Lord Kelvin’s time. To propose that there is a life force involved may not have empirical basis, but can be validated by personal experience. And to suggest that there can be nothing more than enzymes, nerve signals, molecules floating around, etc., is perhaps also a failure to admit one’s own ignorance.
Secondly, it seems to me that there is a philosophical/existential way of seeing things that is different from the drier, more scientific point of view that one usually finds on this blog.
“If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself.”
The problem with this statement is that reality is relative, and our understanding of it depends on our limited ways of knowing about it. So to state that something is a mystery or is unknown might also be a recognition of this limitation. Something at one level of understanding might be something different on another level.
But of course this should not prevent us from trying to find out what is hiding behind the mystery.
I just read “The Profession of Faith of the Savoyard Vicar” in Emile by Jean Jacques Rousseau. He’s responsible for intelligent design (that annoying “who made this watch?” story), and an early “who caused the big bang? GOD did!” argument. I think this falls into your “mysterious answer” category. Positing a supernatural being doesn’t really answer anything, it just moves the mystery into a new, man-made construct.
I don’t think “elan vital” needed to be a curiosity-stopper. It could be a description.
Some things are alive. Some are not. Live things are different; they do things that dead things do not. It’s a difference that’s worth noticing. If “elan vital” is a synonym for “alive” and not an explanation, then it’s useful. It doesn’t have to stop you from asking what the difference is.
Urea is not alive. That was a red herring. But it suggested a new idea, one that will probably be realised someday soon. In theory there’s nothing about cells that we can’t understand in detail. Probably within 50 years we’ll be able to create a living cell from nonliving components. If not 50 years, certainly within 200 years. We’re very close.
Not 50 years. Craig Venter did it already in 2010. So it took 3 years to do what you thought it would take 50.
He didn’t actually synthesize a whole living thing. He synthesized a genome and put it into a cell. There’s still a lot of chemical machinery we don’t understand yet.
I think Kelvin gets a bit of a raw deal in the way people often quote him: “[life etc.] is infinitely beyond the range of any scientific inquiry”.
By cutting off the quote there, it sounds like he is claiming that science will never be able to understand life. However, as you show above, he continues with ”… hitherto entered on.” Thus, the sentence is making a claim about the power of science, up to the time of his writing, to understand life. This is a far more reasonable claim.
I am wondering what kind of force it is that causes one’s shoes to come off during a forceful impact.
I think you have overlooked the possibility of non-rational knowledge. Maybe science is limited to the rational, empirical search for causality, but there is meaning beyond this specific mode of cognition. This is to say, I don’t think an exception to reductionism is necessarily an admission of mystery. It may be acceptance of thought independent of matter, or, simply put, a belief that the mind comes before the material universe. Once again the old chicken-and-egg problem. You don’t need a linear solution. A circular causality, where cause and effect are not ontologically absolute, may adapt to the circumstances or points of view. Just as you mentioned, drawing diagrams of [cause]→[effect] does not amount to learning, for it has no influence on what you know. It has utility only if this takes place in a ‘thingspace’ where a network of ideas models the experience of the mind. In this case, those diagrams may operate as little brain apps, allowing for coherent behavior. This is rationality. The search for truth and beauty is not confined by reason. I think this is the whole point of doubting pure reductionism, not arguing for mystery and cherishing ignorance, as your words imply.
1) Great post and great comments.
2) Like a few people have mentioned, using a life force as an explanation isn’t necessarily a bad thing. It depends what you have in mind. You could believe in the life force without exhibiting any of the four signs of a curiosity-stopper. It would be interesting to know how many people used the life force as a curiosity-stopper when it was popular. I would guess that most people did. Sounds like a good job for those experimental philosophers to show they do more than just run polls about intuitions.
3) “You have a little causal diagram in your head that says [“Elan vital!”] → [hand moves]. But actually you know nothing you didn’t know before. You don’t know, say, whether your hand will generate heat or absorb heat, unless you have observed the fact already; if not, you won’t be able to predict it in advance.”
I disagree that you know nothing more than you did before. When I think of a life force I picture different things than, say, electrical force. Maybe your concept of life hasn’t substantially changed, but it has been enriched slightly, and the more you enrich a concept the more falsifiable it becomes. I would argue that the more falsifiable a concept is, without being shown to be false, the more useful it is (in general). For instance, if I said meaning was holistic, I think this is somewhat analogous to saying motion in the living is generated by a life force. It loosely constrains other things you can believe about meaning or life.
Phlogiston exists. We call it “absence of oxygen”. Nobody acted like positive charge wasn’t real when they found out it was the absence of electrons.
This is a wrong reification in so many specific cases...
Do you mean that a bottle full of nitrogen would be phlogiston, in the same way that a hydrogen ion (H⁺) is a proton (the absence of an electron)?
Why is the “absence of oxygen” phlogiston in this case?
Nitrogen would be phlogiston-saturated air, in which nothing would burn. Coal would be full of phlogiston and burn easily in any air that isn’t phlogiston-saturated.
I went and read up on phlogiston a little bit, and this makes sense to me now. The nitrogen (absence of oxygen) is a good analogy for what is a very weird theory. (Phlogiston—I can see why steampunks are so drawn to this esoteric and wildly insane theory—and I can see why at the time it made sense to Stahl, even though it was wildly wrong… The terminology tends to sound really ludicrous: phlogisticated or dephlogisticated… Uh, huh...)
I can now see where my analogy with the proton is off, as well.
Acknowledging, of course, that the nomenclature we are considering is among the most ridiculed of historical attempts at scientific explanation, I don’t think the analogy would call the absence of an electron a proton. A proton is a specific particle that has a positive charge, but not all positive charges are considered to be protons (even if protons are usually involved somewhere underneath in conventional matter).
This seems like a really important point. Even this seemingly non-rational explanation pointed to important intuitions that could later be implemented in the map that is “science”. However, before that, it’s not as though cataloging these intuitions and attempting to label them held no information about the world.
“But ignorance exists in the map, not in the territory.”
“To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.”
“Every mysterianism, though it may fail to predict details and quantities, is ultimately vulnerable to the one experience in all the world that it does prohibit—the discovery of a non-mysterious explanation.”
Wow! Certainly no shortage of great insights and quotable aphorisms in this posting and commentary. Yet I will still claim that mysterious answers can be rendered not just harmless, but positively helpful if you exploit them properly. How to exploit them? Well, first you recognize them by the suggested 4 signs. Then you turn around sign #1 and treat them as a curiosity-magnet rather than curiosity-stopper. Then you turn around sign #2 by empirically investigating to find the moving parts. (Like the folks around Delbruck who thought that investigating life would reveal new laws of physics.) And finally, avoid falling into the traps implicit in signs #3 and #4.
Am I the only one who, while reading this post, thought “why doesn’t the same apply to anything else we ever discover”?
Elan vital (and phlogiston and luminiferous aether etc.) were particles/substances/phenomena postulated to try to explain observations made. How are quarks, electrons and photons any different? Just because we recognise these as the best available theory today, I am not sure I understand how one is a curiosity-stopper any more than the other.
The real curiosity-stopper is the suggestion that something is forever beyond our understanding and that attempting to research it is destined to be futile. Your quote from Lord Kelvin exhibits this mentality, but only very slightly. Certainly a lot less than some of that stuff you hear from religious people who think God explains everything but is beyond our understanding. I think the history of science shows that this mentality is continually diminishing, and Lord Kelvin’s quote may simply be a transitional fossil.
I still see traces of this mentality today. Ask a cosmologist what happened in the first few seconds after the big bang and they might say the particle horizon makes it fundamentally impossible to see beyond the point where the universe became optically transparent. I think many people think similarly about consciousness — not because they think we can’t dissect the brain and figure out how it works, but rather because they think we will never be able to come up with a coherent, useful definition of the term that reasonably matches our intuition. I think each of these is a curiosity-stopper.
The difference between electrons and elan vital is that the former come with equations that let you predict things. If you said “electricity is electrons” that would be a curiosity-stopper, but if you said “electricity is electrons, and by the way they obey the Lorentz force equation [F = …] and Maxwell’s laws [del E = …]” that would be an explanation.
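For reference (standard physics, supplied here rather than quoted from the comment), the equations gestured at are the Lorentz force law and, for example, Gauss’s law from Maxwell’s equations:

$$\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B}), \qquad \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}.$$

Given a charge, a velocity, and the fields, these pin down a specific anticipated force, which is exactly what “élan vital” never did.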
I wouldn’t call the luminiferous aether a curiosity-stopper, because it was an actual theory that did make predictions (it was essentially falsified in one experiment).
The luminiferous aether is also a brilliant example of how rationalists should and have formed hypotheses based on a combination of a priori logic, a hypothetical non-self-contradicting set of assumptions, and empirical evidence.
The degree of statistical support a theory could be expected to receive is very important to hypothesis formation.
Given what was known at the time, a theoretical paradigm such as aether physics would have been expected to be true more often than not across possible worlds.
At the time the theory was extremely apt in describing empirically-verifiable experiments. That’s exactly why I’m glad I was taught about the luminiferous aether from a very young age even though it is not a part of current contemporary physics.
With respect to scientific pedagogy I would therefore say it is very important that we continue to teach students about the history of scientific paradigms, even those paradigms since lost to progress.
While I understand that there are some questions that cannot be completely answered, I feel as though you have chosen to ignore the fact that science at that time was inadequate to really understand the underlying mechanisms. Even today there is no complete understanding of any field, just educated guesses based on experiments and observations. Elan vital was just one theory attempting to describe why life happens, and it was based on the observation that life had something more than un-living matter. However, further experiments altered this theory. Would you say the same thing about quantum theory, or the electromagnetic spectrum, or even E=mc²? So far those theories, while truthful when modeling current events, have not been conclusively proven. However, by aggressively insinuating that anyone who uses a theory that has not been categorically proven as fact is lacking in rational thought, you belittle the field of science and all that it has achieved.
You’re missing the point. ‘Elan vital’ was not even a theory of why life is different from non-life; it was merely a statement of the observation that life is different from non-life.
“Why are living things different from nonliving things?”
“Elan vital! (Where ‘elan vital’ is defined as ‘whatever causes living things to be different from nonliving things’)”
This exchange has not improved anyone’s understanding of life; it is actually worse than saying nothing, since it feels like you’ve explained something, and so it’s a “curiosity-stopper”.
How is that a curiosity-stopper? Either someone is satisfied with that explanation (like science), or they want to know more about elan vital. Then someone will find that the answer to what elan vital is is either mystical (thereby bringing religion into the equation) or not known, in which case a curious person would want to find out how elan vital functions, leading to new discoveries. Similarly, we now have forces at the atomic level whose functioning we don’t understand, and yet quantum theory is generally accepted as truth. How is this different from elan vital at the time?
Please elaborate, because on its face that statement does not seem accurate. We do understand how the electromagnetic, weak, and strong forces function. There are places where quantum field theory fails, but there are plenty of places where it succeeds and makes good predictions.
In contrast, “elan vital” doesn’t make any predictions. It doesn’t drive curiosity because there’s no way to test it and get results that we can then try to understand better.
Honestly, how much direct familiarity do you have with the actual historical vitalist theories, as opposed to third- or fourth-hand strawman accounts peppered with a few convenient soundbites, such as the one presented in the original post here?
One of the worst tendencies often seen on LW is the propensity to thrash these ridiculous strawmen instead of grappling with the real complexity of the history of ideas. Yes, historical scientific theories like vitalism and phlogiston have been falsified, but bashing people who held them centuries ago as dimwits who sought to mysticize the questions instead of elucidating them is sheer arrogant ignorance.
Even the original post itself lists an example where vitalism (i.e., its strong version) made concrete predictions that could be falsified, and which were indeed falsified by Wöhler’s experiments. Another issue where (weaker) vitalism made falsifiable predictions that led to hugely important insight was the question of the spontaneous generation of microorganisms (and molds, etc.). It was a vitalist model that motivated Pasteur’s experiments that demonstrated that such generation does not occur, and thus that sterilized stuff remains sterile once sealed.
Yes, of course, nowadays we know better than all of these people, but bashing them is as silly as taking a sophomore course in relativity and then jeering at Galileo and Newton as ignorant idiots.
Edit: For those interested in the real history of vitalism rather than strawmen, here is a nice article:
http://mechanism.ucsd.edu/teaching/philbio/vitalism.htm
Yes, I see that in this case I was using “elan vital” as a stand-in example for “postulating an ontologically basic entity that just so happens to validate preconceived categories.”
It was an overstatement to say that elan vital makes no predictions, and I thank you for pointing that out. However, I think the average person probably heard the theory and just took it as a confirmation of a stereotypical non-materialist worldview, i.e. a curiosity-stopper.
Do you think this is significantly different from the average person’s interaction with modern scientific theories?
Probably not, but it takes a much more significant degree of willful misinterpretation somewhere along the line to construe modern scientific theories as supporting non-materialist worldviews.
I suppose that’s probably right—I guess people are more likely to think “science supports a materialistic worldview (but can’t explain everything)” (except when, like, quantum mechanics or superstrings or whatever come into play). So, less “non-materialist”, but still an appreciable degree of “curiosity stopping”. Hmm.
I don’t think that’s what Eliezer is doing here (Except maybe Kelvin, but he deserved it).
The point is not to bash the people who held these beliefs; the point is to see how we can do better.
And for the most part, there isn’t a point to “grappling with the real complexity of the history of ideas”. From this particular parable, we see more clearly that a hypothesis must constrain our anticipated experiences, and as a side note nothing is inherently mysterious. Moving on.
Ignorance is not the source of my arrogance. It is deserved pride.
The problem is that the “parable” is presented as an account of the actual historical vitalist theories. As such, it seriously misrepresents them and attributes to them intellectual errors of which they were not guilty in reality. It’s similar with other LW articles that use phlogiston as a whipping horse. If you look at a real historical account of these theories, you’ll see that they implied plenty of anticipated experiences, and were abandoned because they made incorrect predictions, not because they were empty of predictive power and empirical content.
As for “deserved pride,” if an exposition of your insight requires setting up strawmen to knock down, instead of applying it to real ideas actually held by smart and accomplished people, past or present, then something definitely seems fishy. Not to mention that pride is hardly a suitable emotion to feel just because you happen to live at a time in which you were able to absorb more knowledge than in earlier times—especially if this means feeling superior to people whose work was the basis and foundation of this contemporary knowledge, and their theories that provided decisive guidance in this work. Yes, you do know more than they did, but while they made decisive original contributions, what have you done besides just passively absorbing the existing knowledge?
Don’t worry, we’re not going to hang anybody for it.
But I am superior to them. I have a better understanding of the world. I can access most of human knowledge from a device that I keep in my pocket. I can travel hundreds of miles in a day. I have hot running water in my house. Yes, all these things are true because I “just happen” to live in this time. It makes me better than those who came before, and worse than those who will come after. Similarly, I am better than I was yesterday, and hopefully I am worse than I will be tomorrow.
Let us not forget Themistocles’s taunt: “I should not have been great if I had not been an Athenian, nor would you, were you an Athenian, have become Themistocles.” Perhaps Kelvin would have been greater than I had he been born in this time. But sadly he was not.
Rationality is no place for false humility, and we should not revere those who came before as though they were wiser than us. Be aware of your power and grow more powerful.
Also, it is questionable whether our supposedly better individual understanding of the world would survive any practical tests outside of our narrow domains of expertise. After all, these days you only need to contribute some little details in a greatly complex system built and maintained by numerous others, of which you understand only a rough and vague outline, if even that. How much actual control over the world does your knowledge enable you to exert, outside of these highly contrived situations provided by modern society?
One could argue that a good 19th century engineer had a much better understanding of the world judging by this criterion of practical control over it. These people really knew how to bootstrap complex technologies out of practically nothing. Nowadays, except perhaps for a handful of survivalist enthusiasts, we’d be as helpless as newborn babes if the support systems around us broke down. Which makes me wonder if our understanding of the world doesn’t involve even more “mysterious answers” for all practical purposes outside of our narrow domains of expertise. Yes, you can produce more technically correct statements about reality than anyone in the 19th century could, but what can you accomplish with that knowledge?
I’m not disputing your other points, but for most typical practical purposes I as good as know things that I don’t actually know, because I can make use of specialists, trading on my own specialty. The practical value of literally, on an individual level, knowing how to recreate technology from scratch is limited, outside of highly contrived situations such as those that are contrived by the scriptwriters of the MacGyver TV show. This could conceivably change in a sufficiently extreme survivalist scenario, though I have my doubts about the likelihood of an actual Robinson Crusoe scenario in which you literally have to do it all yourself with no possibility for specialization and trade. There are also books. If you have a good library, then you can have a lot of information at your fingertips should the need arise without literally having to have it in your head right now.
I don’t think we have any real disagreement here. Clearly, if the present system is not in danger of breaking down catastrophically (and it doesn’t seem to be, at least in the short to medium run), we’re better off with specialization. Unlike in the 19th century, we are technologically far beyond the limit of what could be created from scratch without enormous numbers of people working in highly specialized roles, and barring a cataclysmic breakdown, old-fashioned versatile technical skills are not worth the opportunity cost of acquiring them.
(I think you are underestimating the difficulty of translating information from books into actually getting things done, though. Think just how hard it is to cook competently from recipes if you’re a newbie.)
In the past, however, people didn’t have this luxury of living in a complex world where you can create value and prosper by specializing, and where you can acquire correct scientific knowledge from readily available sources. Yet with their crude provisional theories and primitive and self-reliant technical abilities, they managed to create the foundations for our present knowledge and technology out of almost nothing. I think we do owe them respect for this, as well as the recognition that their work required amazing practical skills that few, if any people have today, even if only because it’s no longer worthwhile to acquire them.
I’m not sure this is a good example because I’ve had great success cooking out of the Fannie Farmer cookbook. However, this does not negate your point about difficulty, because kitchen cooking is not necessarily representative of the difficulty of things in general.
Yes, this is one of those other points that I’m not disputing.
I disagree. If Isaac Newton believes I owe him something, he can call my lawyer, but I’m pretty sure I didn’t agree to anything of the sort.
Why would I want to assert control over the world outside of that context? I am in that context—that’s part of my point. I am a better human in part because I am a human with a computer and a car and a cellphone and the Internet. My descendants might be better in part because they are robots/cyborgs/uploaded/built out of nanobots. And we are all better because we are connected and able to perform tasks together that no lone ‘survivalist’ can.
I don’t know who you mean by “we,” but in any case, I don’t think objecting to misrepresentations and strawmen is unreasonable even if they’re directed against people who are long dead.
Then why the need to invent strawmen instead of discussing their actual ideas and theories?
What I want to emphasize is that grappling with reality successfully enough to make a great intellectual contribution is extremely hard. If a theory provides motivation and guidance for work that leads to great contributions, then it should be seen as a useful model, not an intellectual blunder—whatever its shortcomings, and however thoroughly its predictions have been falsified in the meantime. Historically, theories such as phlogiston, aether, or vitalism clearly satisfy this criterion.
Now of course, it makes sense to discuss how and why our modern theories are superior to phlogiston etc. What doesn’t make sense is going out of your way to bash strawmen of these theories as supposedly unscientific and full of bad reasoning. In reality, they were a product of the best scientific reasoning possible given the state of knowledge at the time, and moreover, they motivated the crucial work that led to our present knowledge, and to some degree even provided direct practically useful results.
I’m in a very nitpicky mood today:
‘Elan vital’ seems to predict that there won’t be things that are sort-of alive, like viruses; from what I’ve read about it, it suggests that aliveness is all-or-nothing. It may also predict that things that are dead shouldn’t be able to be made to move by electrical stimulation of the nerves.
I think you’re right. ‘Elan vital’ sounds like a falsified theory, not an unfalsifiable one.
I’m new to reading this blog and am slowly going through the sequences. Eliezer, I’m enjoying your writings a lot and they are really helping to change my way of thinking.
A thought I had while reading this and figured I’d ask for other thoughts:
“To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.”
I know people who are perfectly content to “worship their own ignorance.” Why do you think they don’t value knowing enough to go further? Is it just because they have hit a semantic roadblock and don’t realize it?
(Also, I have no idea how to correctly quote a post or a comment. Help?) :)
You can quote things by starting a paragraph with >.
Thanks for your help. You can delete this thread if you’d like.
And how do I unretract my comment? haha
I know people who are perfectly content to “worship their own ignorance.” Why do you think they don’t value knowing enough to go further? Is it just because they have hit a semantic roadblock and don’t realize it?
Yeah, I think it is. The one model we start with is the model of ourselves. Our hand moves because we will it to do so. If that were the only model I had, that’s how I’d interpret the universe—every event was the result of the will of some being.
And stopping at “The Wizard Did It” makes perfect sense. We experience our own decisions as sufficient causes for our own actions.
I wonder how long it took for the concept of mechanism to take hold.
The quotation feature works by preceding a paragraph with >, not by typing a pipe manually.
Thanks. I eventually found the page on markup. And the little envelope under my Karma that shows me the responses to my comments.
I would like to suggest that the concept of “beauty” in art, relationships, and even evolutionary biology seems to satisfy EY’s criteria for a mysterious answer.
If I ask, “how does the male peacock attract female peacocks?” and one answers “because his tail is big and beautiful,” haven’t they failed to answer my question? Beauty in this response (1) is a curiosity-stopper, (2) has no moving parts, (3) is often uttered by people with a great deal of pride (“the painting is so beautiful!”), and (4) leaves the phenomenon a mystery (in the case of the peacock, I still don’t really know why female peacocks like big colorful tails).
Also, symmetry is a sign of health in bilaterians such as we; so it makes sense that we’d evolve to find symmetry beautiful.
The Handicap Principle is one possibility.
I understand why elan vital is a mysterious answer, but what makes the question mysterious? Isn’t the question “why does living matter move?” a perfectly intelligible one, and the point is simply that we can do a lot better in answering it than “elan vital”?
Funny to see how perfectly dark matter is used as a mysterious answer, in the sense of “How could the universe be expanding? Dark matter!” And I always thought that everything NGC told me as a kid was true and logical and rational. Another childhood memory crushed...
Dark matter and dark energy are theories a little more complicated than that: astronomers can observe gravitational clustering more powerful than ordinary matter allows for; dark energy is beyond the areas of my understanding. So far it seems to me the best guess is Shut up and Calculate.
I thought the mysterious answer to that particular question was ‘dark energy’. I don’t think dark matter is enough to (not particularly) explain it.
What would you have had these biologists use instead? Would you prefer they had no model? It seems clear to me, though I may be wrong, that these scientists had a model (elan vital), and when later evidence came along (modern biology?), they discarded it in favor of a different model. Would you have them instead have picked a different model in the first place? Or have no model at all?
Having no model can be good, if it inspires you to search for a good model. Far worse to think you have a model when you actually don’t.
Well, if by “no model” you mean something like the contemporary folk model of biology (“Blood is what keeps you alive, we’re not quite sure how though, but in general try not to lose your blood”), then elan vital is definitely worse, in that it (a) adds no new information but (b) sounds wiser, and therefore harder to unseat.
This sounds sensible, though it should be mentioned that bloodletting (hm, there’s clearly too much blood here) seems like a candidate for folk biology as well.
(I once had a small, dark bruise underneath a partially healed cut- so it looked like there was this black thing inside my finger, though I was reasonably confident it was just a pool of blood. The urge to cut it open and drain it was unbelievably strong, and I had to put a bandaid on it just so that I couldn’t look at it. After that I had a lot more sympathy for people who thought bloodletting was a sensible treatment. I suspect that particular incident was an anti-parasite impulse which mistakenly pattern-matched the pool, and I imagine most bloodletting was inspired by “you’re way redder than is healthy- let’s fix that!”.)
Are you suggesting that we apply a punishment to any theory that sounds wise? Or that we apply a punishment only for those that also satisfy (a)?
It may make sense to apply some penalty to the log-odds of “profound ideas”, to compensate for the bias.
Likewise, maybe we should assume that beautiful people are stupid to compensate for the halo effect—though that one is a bit trickier, because IQ actually is correlated with attractiveness, just not as strongly as people tend to assume.
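To make that concrete, a minimal sketch of what such a penalty could look like (the one-bit figure is purely illustrative, not something proposed here):

```python
# Illustrative "profundity penalty": shift a hypothesis's log-odds down
# by a fixed amount before using its probability.
import math

def penalized_probability(p, penalty_bits=1.0):
    """Lower p by penalty_bits of log-odds evidence (an illustrative choice)."""
    log_odds = math.log2(p / (1 - p)) - penalty_bits
    odds = 2.0 ** log_odds
    return odds / (1 + odds)

print(penalized_probability(0.5))  # 0.5 -> ~0.333 after a one-bit penalty
```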
Well, ideally we ignore (b) and focus only on (a). (b) only matters in the context of being a more virulent meme.
All the posts implying that the people who came up with the concepts of phlogiston and elan vital were just using science without the benefit of today’s education are missing the point.
Today’s scientists come up with ideas like string theory or dark energy, but they don’t stop there: they are frantically trying to find evidence for them and so far failing. So they are just neat ideas that might explain a lot if shown to be true, but not much more than that. General relativity, by contrast, goes on providing evidence supporting it, including the new evidence for “frame dragging”.
Phlogiston and elan vital were ideas that died for the LACK of evidence. The discoveries of oxygen and electrochemistry killed them. However, when the ideas were proposed, if you didn’t know the answer, you either left the question open or made something up. I might also mention caloric, a pure guess based on very little, which was quashed by science in the form of experiments whose results fitted the concept of energy much better.
My mother’s husband professes to believe that our actions have no influence on the way in which we die, but that “if you’re meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!” for example.
After explaining how I would expect that belief to constrain experience (like how it would affect plane crash statistics), as well as showing that he himself was demonstrating his unbelief every time he went to see a doctor, he told me that you “just can’t apply numbers to this,” and “Well, you shouldn’t tempt fate.”
My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?
Think of them as 3-year-olds who won’t grow up until after the Singularity. Would you kick a 3-year-old who made a mistake?
I jest, but the sense of the question is serious. I really do want to teach the people I’m close to how to get started on rationality, and I recognize that I’m not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc?
Simply consider how likely it is that kicking them in the nuts will actually improve the situation.
(grin) Mostly, by remembering that there are lots of decent people in the world who don’t think very clearly.
Pick your battles. Most people happily hold contradictory beliefs. More accurately, their professed beliefs don’t always match their aliefs. You are probably just as affected as the rest of us, so start by noticing this in yourself.
Strictly speaking, if you somehow knew in advance (time travel?) that you would “die in a plane crash”, then avoiding flying would indeed, presumably, result in a plane crash occurring as you walk down the street.
If you know your attempt will fail in advance, you don’t need to try very hard. If you don’t, then it is reasonable to avoid dangerous situations.
I actually don’t believe this is true, for most mechanisms of “mysterious future knowledge”, including most (philosophical) forms of time travel that don’t allow change. Unless I had some specific details about the mechanism of prediction that changed the situation I would go ahead and try very hard despite knowing it is futile. I know this is a total waste… it’s as if I am just leaving $10,000 on the ground or something! (ie. I assert that newcomblike reasoning applies.)
I don’t understand this.
In Newcomb’s problem, Omega knows what you will do using their superintelligence. Since you know you cannot two-box successfully, you should one-box.
If Omega didn’t know what you would do with a fair degree of accuracy, two-boxing would work, obviously.
In this case you are trying (futilely) so that you, very crudely speaking, are less likely to be in the futile situation in the first place.
Yes, then it wouldn’t be Newcomb’s Problem. The important feature in the problem isn’t boxes with arbitrary amounts of money in them. It is about interacting with a powerful predictor whose prediction has already been made and acted upon. See, in particular, the Transparent Newcomb’s Problem (where you can outright see how much money is there). That makes the situation seem even more like this one.
Even closer would be the Transparent Newcomb’s Problem combined with an Omega that is only 99% accurate. You find yourself looking at an empty ‘big’ box. What do you do? I’m saying you still one-box, taking the empty box. That makes it far less likely that you will be in a situation where you see an empty box at all.
Being a person who avoids plane crashes makes it less likely that you will be told “you will die in a plane crash”, yes.
But probability is subjective—once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.
Absolutely. And I’m saying that you update that probability, perform a (naive) expected utility function calculation that says “don’t bother trying to prevent plane crashes” then go ahead and try to avoid plane crashes anyway. Because in this kind of situation maximising expected utility is actually a mistake.
(To those who consider this claim to be bizarre without seeing context, note that we are talking situations such as within time-loops.)
So … I should do things that result in less expected utility … why?
In the specific “infallible oracle says you’re going to die in a plane crash” scenario, you might live considerably longer by giving the cosmos fewer opportunities to throw plane crashes at you.
I was assuming a time was given. wedrifid was claiming that you should avoid plane-crash causing actions even if you know that the crash will occur regardless.
Yes, you are correct. Or at least it is true that I am not trying to make a “manipulate time of death” point. Let’s say we have been given a reliably predicted and literal “half life” that we know has already incorporated all our future actions.
OK.
So the odds of my receiving that message are the same as the odds of my death by plane, but having received it I can freely act to increase the odds of my plane-related death without repercussions. I think.
If you know the time, then that becomes even easier to deal with—there’s no particular need to avoid plane crash opportunities that do not take place at that time. In fact, it then becomes possible to try to avoid it by other means—for example, faking your own plane-crash-related demise and leaving the fake evidence there for the time traveller to find.
If you know the time of your death in advance, then the means become important only at or near that time.
Let’s take this a step further. (And for this reply I will neglect all acausal timey-wimey manipulation considerations.)
If you know the time of your death you have the chance to exploit your temporary immortality. Play Russian Roulette for cash. Contrive extreme scenarios that will either result in significant gain or certain death. The details of ensuring that it is hard to be seriously injured without outright death will take some arranging but there is a powerful “fixed point in time and space” to be exploited.
The problem with playing Russian Roulette under those circumstances is that you might suffer debilitating but technically nonfatal brain damage. It’s actually surprisingly difficult to arrange situations where there’s a chance of death but no chance of horrific incapacitation.
Yes, that was acknowledged as the limiting factor. However, it is not a significant problem when playing a few rounds of Russian Roulette. In fact, even assuming you play the Roulette to the death with two different people in sequence, you still only create two bits of selection pressure towards the incapacitation. You can more than offset this comparatively trivial amount of increased injured-not-dead risk (relative to the average Russian Roulette player) by buying hollow-point rounds for the gun and researching optimal form for suicide-by-handgun.
The point is, yes, exploiting death-immunity for optimizing other outcomes increases the risk of injury in the same proportion as it increases the probability of the desired outcome, but this doesn’t become a significant factor for something as trivial as a moderate foray into Russian Roulette. It would definitely become a factor if you started trying to brute-force 512-bit encryption with a death machine. That is, you would still end up with practically zero chance of brute-forcing the encryption, and your expected outcome would come down to whether it is more likely for the machine to not work at all or for it to merely incapacitate you.
This is a situation where you really do have to shut up and multiply. If you try to push the anti-death exploitation too far, you will just end up with (otherwise) low-probability, likely undesirable outcomes occurring. On the other hand, if you completely ignore the influence of “death” outcomes being magically redacted from the set of possible outcomes, you will definitely make incorrect expected utility calculations when deciding what is best to do. This is particularly the case given that there is a strict upper bound on how bad a “horrific incapacitation” can be. ie. It could hurt a bit for a few hours till your certain death.
This scenario is very different and far safer than many other “exploit the impossible physics” scenarios in as much as the possibility of bringing disaster upon yourself and others is comparatively low. (ie. In other scenarios it is comparatively simple/probable for the universe to just to throw a metaphorical meteor at you and kill everyone nearby as a way to stop your poorly calibrated munchkinism.)
I shall assume you mean “sufficiently low chance of horrific incapacitation for your purposes”.
It isn’t especially difficult for the kind of person who can get time travelling prophets to give him advice to also have a collaborator with a gun.
I have to admit, you’ve sort of lost me here.
Call P1 the probability that someone who plays Russian Roulette in a submarine will survive and suffer a debilitating injury. P1 is, I agree, negligible.
Call P2 the probability that someone who plays Russian Roulette in a submarine and survives will suffer a debilitating injury. P2 is, it seems clear, significantly larger than P1.
What you seem to be saying is that if I know with certainty (somehow or other) that I will die in an airplane, then I can safely play Russian Roulette in a submarine, because there’s (we posit for simplicity) only two things to worry about: death (which now has P=0) or non-fatal debilitating injury (which has always had P=P1, which is negligible for my purposes).
But I’m not quite clear on why, once I become certain I won’t die, my probability of a non-fatal debilitating injury doesn’t immediately become P2.
The probability does become P2, but in many cases, we can argue that P2 is negligible as well.
In the submarine case, things are weird, because your chances of dying are quite high even if you win at Russian Roulette. So let’s consider the plain Russian Roulette: you and another person take turns trying to shoot yourself until one of you succeeds. For simplicity’s sake, suppose that each of you is equally likely to win.
Then P1 = Pr[injured & survive] and P2 = Pr[injured | survive] = P1 / Pr[survive]. But Pr[survive] is always at least 1/2: if your opponent shoots himself before you do, then you definitely survive. Therefore P2 is at most twice P1, and is negligible whenever P1 is negligible.
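A quick Monte Carlo sketch of this argument (a sketch only; the 50/50 chance of losing and the 1% chance of surviving a bullet are this thread’s working assumptions):

```python
# Sketch: two-player Russian Roulette, loser dies or is incapacitated.
# Estimates P1 = Pr[injured & survive] and P2 = Pr[injured | survive].
import random

TRIALS = 1_000_000
P_SURVIVE_BULLET = 0.01  # assumed chance that being shot is survivable

injured_and_survive = 0
survive = 0

for _ in range(TRIALS):
    i_lose = random.random() < 0.5          # each player equally likely to lose
    if not i_lose:
        survive += 1                        # opponent is shot first; I walk away
    elif random.random() < P_SURVIVE_BULLET:
        survive += 1                        # shot, but survive injured
        injured_and_survive += 1
    # otherwise: shot and killed

p1 = injured_and_survive / TRIALS           # ~0.005
p2 = injured_and_survive / survive          # ~0.0099
print(p1, p2, p2 / p1)                      # the ratio stays below 2, as argued
```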
I see how you calculated that, but I think you’re looking at the wrong pieces of evidence, and I agree with TheOtherDave.
You have an even split chance of getting the real bullet in play, so let’s put that down:
P[bullet] = 0.5, P[¬bullet] = 0.5
Then, given that you DO get the bullet, you have a very high chance of being dead if you don’t know how you will die:
P[die | bullet] = 0.99, P[¬die | bullet] = 0.01
Of course, this means that overall, P[die] = 0.495, and P[injury] = 0.005. However, if you already also know that P[¬die] = 1, then...
P[bullet & ¬die] = 0.5, P[¬bullet & ¬die] = 0.5
...because P[bullet] is computed before its causal effects (death or injury) can enter the picture, which means you’re left with P[injury] = 0.5 (a hundred times larger than P1!).
Thus, while the first chance of injury is negligible, the chance of injury once you already know that you won’t die is massively larger, given that P[injury XOR death | bullet] = 1 (which is implied in the problem statement, I would assume).
Edit: I realize that this makes the assumption that your chances of getting the bullet don’t correlate with knowing how you will die, but it most clearly exposes the difference between your calculation and other possible calculations. This is not the correct way to calculate the probabilities in real life, since it’s much more likely that non-death is achieved by not having the bullet in the first place (or by failing to play Russian Roulette at all), but there’s all kinds of parameters you can play with here. All I’m saying is that P2 isn’t necessarily at most twice P1; it all depends on the other implicit conditions and priors.
No, that’s not right. What we’re interested in here is P[injury|¬die]. Using Bayes’ Theorem:
P[injury|¬die] = {P[¬die|injury]*P[injury]}/P[¬die]
Using the figures you assume, and recalling that “injury” refers only to non-fatal injury (hence P[¬die|injury]==1):
P[injury|¬die] = {1*0.005}/0.505 = 1/101 = approx. 0.00990099
The chances of injury are then not quite double what they would have been without death-immunity. This is reasonably low, because the prior odds of survival at all are reasonably high (0.505) - had the experiment been riskier, such that there was only a 0.01% chance of survival overall, then the chance of injury in the death-immunity case would be correspondingly higher.
(We also have not yet taken into account the effect of the first player—in such a game of Russian Roulette, he who shoots first has a higher prior probability of death).
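For anyone who wants to check the arithmetic, a minimal calculation of the above, including the riskier variant just mentioned (all figures are assumptions from this thread):

```python
# Pr[injury | survive] for a single round, by direct application of Bayes.
def p_injury_given_survival(p_bullet, p_survive_bullet):
    p_injury = p_bullet * p_survive_bullet   # shot, but the wound is non-fatal
    p_survive = (1 - p_bullet) + p_injury    # never shot, or shot and survived
    return p_injury / p_survive              # condition on survival

print(p_injury_given_survival(0.5, 0.01))     # ~0.0099, i.e. 1/101
print(p_injury_given_survival(0.9999, 0.01))  # ~0.99: riskier game, injury dominates
```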
Thanks for the full Bayes Theorem breakdown.
I agree that this is how it should be reasoned ideally, which I only realized after first posting the grandparent. See other comments and the edit for how I arrived at the 50/50 reasoning. If you know the answer to the bottom/last question in this comment, I’d be interested to know.
It depends on how exactly this time-travel-related knowledge of how you die will work. My calculation is correct if a random self-consistent time loop is chosen (which I think is reasonable) -- there are far more self-consistent time loops in which you survive because you don’t get the bullet, than ones in which you survive because a bullet failed to kill you.
Terrible things start happening if there’s some sort of “lazy pruning” of possibilities (which I think is what you’re suggesting). Then the probability you get shot is 0.5, and then if you do get shot the self-consistency condition eliminates the branches in which you die, so you are nearly guaranteed to survive in some horrible fashion.
I don’t like the second option because it requires thinking of branching possibilities as some sort of actual discrete things, which I find dubious. But it’s a bit silly to argue about interpretations of time travel, isn’t it?
This is a bit of what I was getting at with the edit in the grandparent: basically, it’s not very bayesian to stick to a 50/50 bullet chance when you know you will not die.
However, I was also considering the least convenient possible world: You already know that you won’t die, and since the worst you have to fear is debilitating permanent non-fatal injury (which you apparently don’t care about if it’s for Science!), you decide to repeatedly play Russian Roulette with tons of people, just to test probabilities or for fun or something.
Then what happens? Does it become more probable that you’ll just randomly end up always not getting the bullet with .98 probability (if the chance of surviving a bullet was 1%), which will to an outside view apparently defy all probabilities? Or does it instead stick to the 50/50, and on average you’ll get injured every second game (assuming you have a way of recovering from the injuries and playing again) without ever dying?
More importantly, which scenario should an ideal bayesian agent expect? This I have no idea, which is why I think it’s not trivial or obvious that P2 = 2*P1.
The calculation that P2=2*P1 obviously only applies to one game. If you play lots of games sequentially, then the probability increases stack. (Edit: I incorrectly said that the ratio doubles with every game, which is obviously false)
Another way of thinking about this: absent time travel, if you survive a bullet with 1% probability, then after N games your probability of surviving unscathed is 1/2^N, and your total probability of surviving is (101/200)^N. Therefore, given that you survive, your probability of surviving unscathed should be the ratio of these, or (100/101)^N.
(All of this is assuming the random self-consistent time loop interpretation of time travel.)
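A sketch of that calculation as a simulation: run N consecutive games and keep only the runs where you survive them all, which is the random-self-consistent-loop conditioning (numbers are again this thread’s assumptions):

```python
# Conditioning on survival across N games of two-player Russian Roulette.
import random

def play_n_games(n, p_survive_bullet=0.01):
    """One run of n consecutive games; returns (survived_all, unscathed)."""
    unscathed = True
    for _ in range(n):
        if random.random() < 0.5:                 # I get the bullet this game
            if random.random() < p_survive_bullet:
                unscathed = False                 # injured but alive
            else:
                return False, False               # dead: loop inconsistent, discard
    return True, unscathed

N, TRIALS = 5, 1_000_000
survived = unscathed = 0
for _ in range(TRIALS):
    s, u = play_n_games(N)
    survived += s
    unscathed += u

print(unscathed / survived)   # close to the formula below
print((100 / 101) ** N)       # ~0.9515 for N = 5
```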
Hmm, interesting. That would imply that, to a third party, there’s some random guy who wins 99% of the time at Russian Roulette. At this point, it should legally be considered murder.
Death sentence by plane crash sounds appropriate, in this case.
It is murder, but you’re going to have terrible trouble proving that (especially if he’s careful about documenting how fair the Russian Roulette is). To avoid murder charges, the hypothetical psychopathic death-immune person can go so far as to arrange a tournament, with 2^n entries for some integer n. In this arrangement, one person must survive every round, and thus it does not look suspicious afterwards that he did survive every round (plus he gets 2^n prizes for going through n rounds).
I’m pretty sure that the Russian Roulette itself is illegal just about everywhere, though. No matter how it’s done, it’s either murder or assisted suicide.
Or only enter other people’s tournaments and have them document their own procedure.
I would have expected a different label to apply. Neither of those seems accurate. In fact I didn’t think even assisted suicide got called “assisted suicide”.
To be precise (and absent any other interventions), P2 is larger than P1 by a factor of 2 (in the two-person, to-the-death, randomized-start case).
I thought I was fairly clear that that was exactly what I was arguing. Including the implication that it doesn’t immediately become more than P2. This (and other previously unlikely failure modes) have to be considered seriously, but overweighting their importance is a mistake.
Ah! I see. Yeah, it seems I wasn’t thinking about Actual Russian Roulette, in which two players take turns and the most likely route to survival is my opponent blowing his brains out first, but rather Utterly Ridiculous Hypothetical Variation on Russian Roulette, in which I simply pull the trigger over and over while pointing at my own head, and the most likely route to survival is a nonfatal bullet wound.
Ahh, yes. That seems to be an impractical course of action even with faux-immortality. It may be worthwhile if some strange person was willing to pay exorbitant amounts of cash per shot and you were also permitted to spin the cylinder after every shot (or five shots) in order to randomize it. Then the accumulated improbability (ie. magnified injury and ‘unknown black swan’ possibility) would ultimately injure or otherwise interrupt you, but only after your legacy had been improved significantly.
I’ve made the exact same mistake before. Maybe there should be (or is) a name for that.
It does become P2. But… aside from the submarine issue… if you play Russian Roulette against one other person, then with probability 1/2, the other person gets shot before you do. Assuming that the loser either dies or is incapacitated, we can write P1 as Pr[shot & survive] and P2 as Pr[shot | survive], and by a simple application of Bayes P2 is at most twice P1.
If you’re the sort of person who would take advantage of such knowledge to engage in dangerous activities, does that increase the probability that your reported time of death will be really soon?
On the other hand, if you’re the kind of person who (on discovering that you will die in a plane crash) takes care to avoid plane crashes, wouldn’t that increase your expected life span?
Moreover, these two attitudes—avoiding plane crashes and engaging in non-plane-related risky activities—are not mutually exclusive.
Absolutely. Note the parenthetical. The grandparent adopted the policy of ignoring this kind of consideration for the purpose of exploring the implied tangent a little further. I actually think not actively avoiding death, particularly death by the means predicted, is a mistake.
You can do the same thing if you know only the means of your death and not the time in advance; merely set up your death-stunts to avoid that means of death. (For example, if you know with certainty that you will die in a plane crash but not when, you can play Russian Roulette for cash on a submarine).
And then the experimental aqua-plane crashes into you.
An important safety tip becomes clear; if you’re involved in a time loop and know the means of your death, then keep a very close eye on scientific literature and engineering projects. Make sure that you hear the rumours of the aqua-plane before it is built and can thus plan accordingly...
… True.
But you could still be injured by a plane crash or other mishap at another time, at standard probabilities.
And you should still charter your own plane to avoid collateral damage.
I am happy to continue the conversation if you are interested. I am trying to unpack just where your intuitions diverge from mine. I’d like to know what your choice would be when faced with Newcomb’s Problem with transparent boxes and an imperfect predictor, in the case where you notice that the large box is empty. I take the empty large box, a choice that doesn’t maximise my expected utility and in fact gives me nothing, the worst possible outcome of that game. What do you do?
Two boxes, sitting there on the ground, unguarded, no traps, nobody else has a legal claim to the contents? Seriously? You can have the empty one if you’d like, I’ll take the one with the money. If you ask nicely I might even give you half.
I don’t understand what you’re gaining from this “rationality” that won’t let you accept a free lunch when an insane godlike being drops it in your lap.
A million dollars.
No, you’re not. You’re getting an empty box, and hoping that by doing so you’ll convince Omega to put a million dollars in the next box, or in a box presented to you in some alternate universe.
And by this exact reasoning, which Omega has successfully predicted, you will one-box, and thus Omega has successfully predicted that you will one-box and made the correct decision to leave the box empty.
Remember to trace your causal arrows both ways if you want a winning CDT.
Remember also Omega is a superintelligence. The recursive prediction is exactly why it’s rational to “irrationally” one-box.
Yes, that’s why I took the one box with more money in it.
Strictly speaking the scenario being discussed is one in which Omega left a transparent box of money and another transparent box which was empty in front of Wedrifid, then I came by, confirmed Wedrifid’s disinterest in the money, and left the scene marginally richer. I personally have never been offered money by Omega, don’t expect to be any time soon, and am comfortable with the possibility of not being able to outwit something that’s defined as being vastly smarter than me.
Remember also Omega is an insane superintelligence, with unlimited resources but no clear agenda beyond boredom. If appeasing such an entity was my best prospect for survival, I would develop whatever specialized cognitive structures were necessary; it’s not, so I don’t, and consider myself lucky.
Ah, then in that case, you win. With that scenario there’s really nothing you could do better than what you propose. I was under the impression you were discussing a standard transparent Newcomb.
Oh, so you pay counterfactual muggers?
All is explained.
The counterfactual mugging isn’t that strange if you think of it as a form of entrance fee for a positive-expected-utility bet—a bet you happened to lose in this instance, but it is good to have the decision theory that will allow you to enter it in the abstract.
The problem is that people aren’t that good at understanding that your specific decision isn’t separate from your decision theory under a specific context… DecisionTheory(Context) = Decision. To have your decision theory be a winning decision theory in general, you may have to eventually accept some individual ‘losing’ decisions: that’s the price to pay for having a winning decision theory overall.
I doubt that a decision theory that simply refuses to update on certain forms of evidence can win consistently.
If Parfit’s hitchhiker “updates” on the fact that he’s now reached the city and therefore doesn’t need to pay the driver, and furthermore if Parfit’s hitchhiker knows in advance that he’ll update on that fact in that manner, then he’ll die.
If right now we had mind-scanners/simulators that could perform such counterfactual experiments on our minds, and if this sort of bet could therefore become part of everyday existence, being the sort of person that pays the counterfactual mugger would eventually be seen by all to be of positive utility—because such people would eventually be offered the winning side of that bet (free money worth tenfold your cost).
While the sort of person that wouldn’t be paying the counterfactual mugger would never be given such free money at all.
If, and only if, you regularly encounter such bets.
The likelihood of encountering the winning side of the bet is proportional to the likelihood of encountering its losing side. As such, whether you are likely to encounter the bet once in your lifetime, or to encounter it a hundred times, doesn’t seem to significantly affect the decision theory you ought to possess in advance if you want to maximize your utility.
In addition to Omega asking you to give him $100 because the coin came up tails, also imagine Omega coming to you and saying “Here’s $100,000, because the coin came up heads and you’re the type of person that would have given me $100 if it had come up tails.”
That scenario makes it obvious to me that being the person that would give Omega $100 if the coin had come up tails is the winning type of person...
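Making the arithmetic behind that intuition explicit (a sketch using this commenter’s figures, which are not the canonical ones):

```python
# Ex-ante expected value of the two dispositions, given a fair coin:
p_heads = 0.5
ev_payer = p_heads * 100_000 + (1 - p_heads) * (-100)  # pays $100 on tails
ev_refuser = 0.0                      # Omega never rewards a known refuser
print(ev_payer, ev_refuser)           # 49950.0 vs 0.0
```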
If the coin therein is defined as a quantum one then yes, without hesitation. If it is a logical coin then things get complicated.
This is more ambiguous than you realize. Sure, the dismissive part came through but it doesn’t quite give your answer. ie. Not all people would give the same response to counterfactual mugging as Transparent Probabilistic Newcomb’s and you may notice that even I had to provide multiple caveats to provide my own answer there despite for most part making the same kind of decision.
Let’s just assume your answer is “Two-box!”. In that case I wonder whether the problem is that you just outright two-box on pure Newcomb’s Problem, or whether you revert to CDT intuitions when the details get complicated. Assuming you win at Newcomb’s Problem but two-box on the variant, then I suppose that would indicate the problem is in one of:

- Being able to see the money rather than merely being aware of it through abstract thought switched you into a CDT-based ‘near mode’ thought pattern.
- Changing the problem from a simplified “assume a spherical cow of uniform density” problem to one that actually allows uncertainty changes things for you. (It does for some.)
- You want to be the kind of person who two-boxes when unlucky, even though this means that you may actually not have been unlucky at all but instead have manufactured your own undesirable circumstance. (Even more people stumble here, assuming they get this far.)

The most generous assumption would be that your problem comes at the final option—that one is actually damn confusing. However, I note that your previous comments about always updating on the free money available and then following expected utility maximisation are only really compatible with the option “outright two-box on simple Newcomb’s Problem”. In that case all the extra discussion here is kind of redundant!
I think we need a nice simple visual taxonomy of where people fall regarding decision-theoretic bullet-biting. It would save so much time when this kind of thing comes up. Then when a new situation arises (like this one, dealing with time traveling prophets) we could skip straight to, for example: “Oh, you’re a Newcomb’s One-Boxer but a Transparent Two-Boxer. To be consistent with that kind of implied decision algorithm, then yes, you would not bother with flight-risk avoidance.”
Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.
“Free will” isn’t incompatible with a predictable (by Omega) universe. I also doubt that all CDTers believe the same thing about human free will in said universe.
I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can’t predict what’s meant to happen. In that case, he’s equally good at predicting any outcome, so it’s a perfectly uninformative hypothesis.
That was exactly my point. If he could make such a prediction, he would be correct. Since he can’t...
I often say stuff like that, but I don’t mean it literally. When someone says “What if you do X and Y happens?” and I think Y is ridiculously unlikely (P(Y|X) < 1e-6), I sarcastically reply “What if I don’t do X, but Z happens?” where Z is obviously even more ridiculous (P(Z|~X) < 1e-12, e.g. “a meteorite falls onto my head and kills me”).
No credit to Nietzsche for the analogy?
Another example: during the conversation between Deepak Chopra and Richard Dawkins, Deepak Chopra thinks that our lack of a very good understanding of, for example, the origin of language, or of jumps in the fossil record, means that an actual discontinuity happened.
I completely accept and (I think) understand this, however there are some phenomena that cannot, by their nature, be known.
A typical example is Cantor’s proof that it is impossible to prove that there are “mid-sized infinities.” More generally, Godel’s incompleteness theorems prove that some things are forever unknowable. (If I’m misunderstanding or misrepresenting, enlighten me. I’m no mathematician.)
More controversially, I suspect that consciousnesses may present a similar problem (for different reasons).
These might be described as inherently mysterious phenomena.
Hi Capla—no, that is not what Godel’s theorem says (actually, there are two incompleteness theorems):
1) Godel’s theorems don’t talk about what is knowable—only about what is (formally) provable in a mathematical or logical sense
2) The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by any sort of algorithm is capable of proving all truths about the relations of the natural numbers. In other words, for any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system. The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency.
3) This doesn’t mean that some things can never be proven—although it provides some challenges—it does mean that we cannot create a consistent system that can demonstrate or prove (algorithmically), within itself, all things that are true for that system
This creates some significant challenges for AI and consciousness—but perhaps not insurmountable ones.
For example—as far as I know—Godel’s theorem rests on classical logic. Quantum logic—where something can be both “true” and “not true” at the same time—may provide some different outcomes.
Regarding consciousness—I think I would agree with the thrust of this post: that we cannot yet fully explain or reproduce consciousness (hell, we have trouble defining it) does not mean that it will forever be beyond reach. Consciousness is only mysterious because of our lack of knowledge of it. And we are learning more all the time:
http://www.ted.com/talks/nancy_kanwisher_the_brain_is_a_swiss_army_knife? http://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness?
We are starting to unravel some of the mechanisms by which consciousness emerges from the brain—since consciousness appears to be a process phenomenon rather than a physical property.
Thank you. I’m a little bit more informed.
My issue with consciousness involves p-zombies. Any experiment that wanted to understand consciousness, would have to be able to detect it, which seems to me to be philosophically impossible. To be more specific, any scientific investigation of the cause of consciousness would have (to simplify) an independent variable that we could manipulate to see if consciousness is present or not, depending on the manipulated variable. We assume that those around us are conscious, and we have good reason to do so, but we can’t rely on that assumption in any experiment in which we are investigating consciousness.
As Eliezer points out, that an individual says he’s conscious is a pretty good signal of consciousness, but we can’t necessarily rely on that signal for non-human minds. A conscious AI may never talk about its internal states, depending on its structure (humans have a survival advantage from the social sharing of internal realities). On the flip side, a savvy but non-conscious AI may talk about its “internal states” because it is guessing the teacher’s password in the realest way imaginable: it has no understanding whatsoever of what those states are, but computes that aping them will accomplish its goals. I don’t know how we could possibly know if the AI is aping consciousness for its own ends or if it actually is conscious. If consciousness is thus undetectable, I can’t see how science can investigate it.
That said, I am very well aware that “throughout history, every mystery ever solved has turned out to be not magic,” and that every single time something has seemed inscrutable to science, a reductionist explanation eventually surfaced. Knowing this, I have to seriously downgrade my confidence that “No, really, this time it is different. Science really can’t pierce this veil.” I look forward to someone coming forward with something clever that dissolves the question, but even so, it does seem inscrutable.
Desiring a mysterious explanation is like wanting to be in a room with no people inside. Once you explain it, it’s not mysterious any more. The property depends on your actions: emptiness is destroyed by you entering; mystery is destroyed by you explaining it. Just an alternative to the map-territory way of putting it.
How is “elan vital” different from, let’s say, the “Higgs boson” in physics? Both are hypothetical parts of reality which need further confirmation and more detailed description.
The Higgs boson has been confirmed. I suppose the wider point was something along the lines of “all unconfirmed hypotheses should be treated equally.” Rationalists typically do not favour a level playing field, and prefer hypotheses that are in line with broad principles that have been successful in the past—principles like reductionism, materialism, and, in earlier days, determinism.
In fact I have no idea what a Higgs boson is, but a physicist tells me it makes for a simpler mathematical system used to predict our experience. We can imagine actually getting evidence that would let us make a more detailed description. (I don’t know if that’s still true in a practical sense, but I believe it used to be true. At worst, all that we lack is energy.)
Meanwhile, “elan vital” makes no predictions except maybe negative ones, and “more detailed description” seems impossible even in principle without special revelation. Unless Eliezer is misreading Kelvin, the esteemed writer actually rules out any such discovery. From an abstract standpoint, the theory can’t be expanded if we can’t get the evidence to justify more details.
Elan Vital is a family of theories some of which could be predictive in principle.
My summary: A mysterious answer is a fake explanation that acts as a semantic stop sign. Signs for mysterious answers:
- Explanation acts as curiosity-stopper rather than anticipation-controller
- Hypothesis is a black box (no underlying principles to derive from)
- Social indication that people cherish their ignorance
According to the internet, “elan vital” was coined by Henri Bergson, but his “Creative Evolution” book is aware of this critique of vitalism, and asserts that the term “vital principle” is to be understood as a question to be answered (what distinguishes life from non-life?). He gives the “elan vital”/”vital impetus” as an answer to the question of what the vital principle is.
Roughly speaking[1], he proposes viewing evolution as an entropic force, and so argues that natural selection does not explain the origin of species, but that rather the origin of species must be understood in terms of the different macrostates that are possible. The macrostate itself is the “elan vital” (and can differ by species), though of course the actual macrostate is distinct from the set of possible macrostates, which is determined by the environment and something that he calls the “original impetus”.
A central example he uses is eyes. He argues that light causes the possibility of vision, which causes eyes; and that different functions of vision (e.g. acquiring food) cause the eyes to have varying degrees of development (from eyespots to highly advanced eyes).[2]
The meaning of the original impetus is less clear than the meaning of the vital impetus. He defines the original impetus as something that was passed in the germ from the original life to modern life, and which explains the strong similarity across lifeforms (again bringing up how different species have similar eye structures). I guess in modern terms the original impetus would be closely related to mitosis and transcription.
(The book was released prior to the discovery of DNA as the unit of heredity, but after the Origin of Species. Around the time the central dogma of molecular biology was becoming a thing.)
… This description of his view actually makes me wonder if the rationalist community has been unfair to Beff Jezos’ assertion that increasing entropy is the meaning of life.
Using my terminology, not his. YMMV about whether it is actually accurate, though a more relevant point is that he goes in depth about the need to understand things, and basically doesn’t support mysterious answers to mysterious questions at all. He merely opposes complex answers to simple questions.
This is in contrast to our modern standard model of evolution, namely Fisher’s infinitesimal model combined with natural selection, which would argue that random mutations increase the genetic variance in the photoreceptiveness of cells, the number of photoreceptive cells, etc., which increases the genetic variance in sight, which in turn increases the genetic variance in fitness, and which then gradually selects for eyes. Henri Bergson argues this does not explain sight because it doesn’t explain why there are all these different things that could correlate to produce sight. Meanwhile light does explain the presence of these correlations, and so is a better explanation of sight than natural selection is.
Full quote: