Rationality Reading Group: Part D: Mysterious Answers
This is part of a semi-monthly reading group on Eliezer Yudkowsky’s ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.
Welcome to the Rationality reading group. This week we discuss Part D: Mysterious Answers (pp. 117-191). This post summarizes each article of the sequence, linking to the original LessWrong post where available.
D. Mysterious Answers
30. Fake Explanations—People think that fake explanations use words like “magic,” while real explanations use scientific words like “heat conduction.” But being a real explanation isn’t a matter of literary genre. Scientific-sounding words aren’t enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well “explain” the opposite of what you observed.
31. Guessing the Teacher’s Password—In schools, “education” often consists of having students memorize answers to specific questions (i.e., the “teacher’s password”), rather than learning a predictive model that says what is and isn’t likely to happen. Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion. Don’t do that: any explanation you give should have a predictive model behind it. If your explanation lacks such a model, start from a recognition of your own confusion and surprise at seeing the result.
32. Science as Attire—You don’t understand the phrase “because of evolution” unless it constrains your anticipations. Otherwise, you are using it as attire to identify yourself with the “scientific” tribe. Similarly, it isn’t scientific to reject strongly superhuman AI only because it sounds like science fiction. A scientific rejection would require a theoretical model that bounds possible intelligences. If your proud beliefs don’t constrain anticipation, they are probably just passwords or attire.
33. Fake Causality—It is very easy for a human being to think that a theory predicts a phenomenon, when in fact it was fitted to a phenomenon. Properly designed reasoning systems (such as general AIs) could use probability theory to avoid this mistake, but humans have to write down a prediction in advance in order to ensure that our reasoning about causality is correct.
34. Semantic Stopsigns—There are certain words and phrases that act as “stopsigns” to thinking. They aren’t actually explanations, nor do they help resolve the actual issue at hand; they simply act as a marker saying “don’t ask any questions.”
35. Mysterious Answers to Mysterious Questions—The theory of vitalism was developed before the idea of biochemistry. It stated that the mysterious properties of living matter, as compared to nonliving matter, were due to an “élan vital”. This explanation acts as a curiosity-stopper, and leaves the phenomenon just as mysterious and inexplicable as it was before the answer was given. It feels like an explanation, though it fails to constrain anticipation.
36. The Futility of Emergence—The theory of “emergence” has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren’t able to make any new predictions.
37. Say Not “Complexity”—The concept of complexity isn’t meaningless, but too often people assume that adding complexity to a system they don’t understand will improve it. If you don’t know how to solve a problem, adding complexity won’t help; better to say “I have no idea” than to say “complexity” and think you’ve reached an answer.
38. Positive Bias: Look into the Dark—Positive bias is the tendency to test a hypothesis by looking for evidence that confirms it, rather than for the disconfirming evidence that would actually distinguish it from its rivals. The classic demonstration is the 2-4-6 task (see the first sketch after this list).
39. Lawful Uncertainty—When facing a random scenario, the correct response is not to behave randomly: in the classic card-prediction experiment, always guessing the more frequent color outperforms randomly “matching” its frequency (see the second sketch after this list). Faced with an irrational universe, throwing away your rationality won’t help.
40. My Wild and Reckless Youth—Traditional rationality (without Bayes’ Theorem) allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable. Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes.
41. Failing to Learn from History—There are no inherently mysterious phenomena, but every phenomenon seems mysterious, right up until the moment that science explains it. It seems to us now that biology, chemistry, and astronomy are naturally the realm of science, but if we had lived through their discoveries, and watched them reduced from mysterious to mundane, we would be more reluctant to believe the next phenomenon is inherently mysterious.
42. Making History Available—It’s easy not to take the lessons of history seriously; our brains aren’t well-equipped to translate dry facts into experiences. But imagine living through the whole of human history—imagine watching mysteries be explained, watching civilizations rise and fall, being surprised over and over again—and you’ll be less shocked by the strangeness of the next era.
43. Explain/Worship/Ignore?—When you encounter something you don’t understand, you have three options: to seek an explanation, knowing that the explanation will itself require an explanation; to avoid thinking about the mystery at all; or to embrace the mysteriousness of the world and worship your confusion.
44. “Science” as Curiosity-Stopper—Although science does have explanations for phenomena, it is not enough to simply say that “Science!” is responsible for how something works—nor is it enough to appeal to something more specific like “electricity” or “conduction”. Yet for many people, simply noting that “Science has an answer” is enough to make them no longer curious about how it works. In that respect, “Science” is no different from more blatant curiosity-stoppers like “God did it!” But you shouldn’t let your interest die simply because someone else knows the answer (which is a rather strange heuristic anyway): You should only be satisfied with a predictive model, and how a given phenomenon fits into that model.
45. Truly Part of You—Any time you believe you’ve learned something, you should ask yourself, “Could I re-generate this knowledge if it were somehow deleted from my mind, and how would I do so?” If the supposed knowledge is just empty buzzwords, you will recognize that you can’t, and therefore that you haven’t learned anything. But if it’s an actual model of reality, this method will reinforce how the knowledge is entangled with the rest of the world, enabling you to apply it to other domains, and know when you need to update those beliefs. It will have become “truly part of you”, growing and changing with the rest of your knowledge.
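As a companion to item 38, here is a minimal Python sketch of the 2-4-6 task discussed in that essay. The particular hidden rule, guessed rule, and test triplets are illustrative choices of mine, not details from the original post.

```python
# A toy version of Wason's 2-4-6 task, illustrating positive bias.

def hidden_rule(a, b, c):
    """The experimenter's actual rule: any strictly increasing triplet."""
    return a < b < c

def guessed_rule(a, b, c):
    """The subject's salient hypothesis after seeing (2, 4, 6)."""
    return b - a == 2 and c - b == 2

# Positive bias: testing only triplets that CONFIRM the guessed rule.
for t in [(2, 4, 6), (8, 10, 12), (20, 22, 24)]:
    print(t, "hidden:", hidden_rule(*t), "guess:", guessed_rule(*t))
# Both rules answer "yes" to every confirming test, so these trials
# can never tell the two hypotheses apart.

# Looking into the dark: a test the guessed rule says should FAIL.
t = (1, 2, 3)
print(t, "hidden:", hidden_rule(*t), "guess:", guessed_rule(*t))
# hidden says True while guess says False -- the guessed rule is refuted.
```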
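Similarly for item 39, a small simulation of the card-prediction experiment behind “Lawful Uncertainty”, assuming the 70% blue / 30% red split from the classic version of the experiment; the trial count and seed are arbitrary.

```python
# Probability matching vs. the lawful, deterministic strategy.
import random

random.seed(0)
trials = 100_000
cards = ["blue" if random.random() < 0.7 else "red" for _ in range(trials)]

# Strategy 1: probability matching -- guess blue 70% of the time, at random.
matching_hits = sum(
    card == ("blue" if random.random() < 0.7 else "red") for card in cards
)

# Strategy 2: always guess the more frequent color.
always_blue_hits = sum(card == "blue" for card in cards)

print(f"probability matching: {matching_hits / trials:.3f}")    # ~0.58
print(f"always guess blue:    {always_blue_hits / trials:.3f}")  # ~0.70
# Expected accuracy: 0.7*0.7 + 0.3*0.3 = 0.58 for matching, 0.70 for blue.
```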
Interlude: The Simple Truth
This has been a collection of notes on the assigned sequence for this week. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!
The next reading will cover Part E: Overly Convenient Excuses (pp. 211-252). The discussion will go live on Wednesday, 15 July 2015 at or around 6 p.m. PDT, right here on the discussion forum of LessWrong.
The sequence on emergence seems to be a bit controversial. Like many commenters, I’ve understood the term emergence as “a result of interacting smaller parts, eventually explainable by science,” as opposed to “mystical.” It’s sort of like wishful thinking in programming, a thinking tool for producing hypotheses: you start with a rough idea and then you fill in the details.
I saw the point of the sequence on emergence as being that people often label the rough idea and then say “Done!”, and the language used to discuss whether or not they are actually done rarely distinguishes between labeling a function and implementing that function in code (see the sketch below).
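To make that distinction concrete, here is a toy Python sketch in the “wishful thinking” style described above; the function names and decomposition are my own illustrative inventions, not anything from the posts.

```python
# Step 1: label the rough idea. This runs, but it predicts nothing
# until the helpers exist -- it is a plan, not yet an explanation.
def predict_wetness(molecules):
    forces = intermolecular_forces(molecules)
    return surface_behavior(forces)

# Step 2: "Done!" is only honest once the details are filled in.
def intermolecular_forces(molecules):
    raise NotImplementedError  # the actual explanatory work lives here

def surface_behavior(forces):
    raise NotImplementedError  # ...and here
```

Saying “wetness is emergent” is like writing predict_wetness and stopping: the name sounds like progress, but calling it still fails.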
I agree that there is a meaningful technical concept there—the dynamics of the interactions of components are different from the dynamics of those components, though they are reducible to them—but I think that EY is right to complain that unless you’ve done the math to figure out what those interaction dynamics are, you don’t have much more predictive power than you did before.
In addition to that, “X is emergent” implies “X doesn’t go all the way down”. So, wetness is emergent, but energy probably is not. The reason why some people are excited about emergence, I’ll wager, is that it lets them resist what I’ll call the Cherry Pion fallacy (i.e. “no cherry pie without cherry pions”).
Now, that may not be very profound. But it’s not completely empty.
I’m curious about why The Simple Truth was included in Rationality as opposed to The Useful Idea of Truth.
‘The Useful Idea of Truth’ will be included in another future ebook—one that collects the contents of Highly Advanced Epistemology 101 for Beginners.
I don’t think the posts serve similar functions, even though they cover some similar topics. ‘The Simple Truth’ I think works best as a light-hearted summary of (and slight elaboration on) issues that have already been raised, crystallizing and tying together existing ideas. ‘The Useful Idea of Truth’ is more introductory, though it also feels like it’s building up to a discussion of cognition and metaphysics—of ‘what kinds of sentences can be meaningful’—rather than closely connecting with the content of Fake Beliefs, Mysterious Answers, and How To Actually Change Your Mind.
more info, pls
Sure! Plans aren’t super concrete yet. The Highly Advanced Epistemology 101 for Beginners ebook may come out sooner than the print versions of Rationality: From AI to Zombies, because it was quite popular and may turn out to be a more useful quick-and-dirty introduction to MIRI researchers’ philosophical outlook.
What occurs to me is that The Simple Truth seems to be about using knowledge to get things done vs. using knowledge to gain status—isn’t that the whole point of being a “Sophisticus Maximus”? Yet I don’t see this difference stressed much in the Sequences. I think Eliezer is very used to the Silicon Valley type of people, for whom using knowledge to get things done is taken for granted. Putting it differently, they live in a culture where you cannot gain status from knowledge if you don’t use it to get things done. Also, Dijkstra is relevant here, but I will put that into the quotes thread.