Double Illusion of Transparency
Followup to: Explainers Shoot High, Illusion of Transparency
My first true foray into Bayes For Everyone was writing An Intuitive Explanation of Bayesian Reasoning, still one of my most popular works. This is the Intuitive Explanation’s origin story.
In December of 2002, I’d been sermonizing in a habitual IRC channel about what seemed to me like a very straightforward idea: How words, like all other useful forms of thought, are secretly a disguised form of Bayesian inference. I thought I was explaining clearly, and yet there was one fellow, it seemed, who didn’t get it. This worried me, because this was someone who’d been very enthusiastic about my Bayesian sermons up to that point. He’d gone around telling people that Bayes was “the secret of the universe”, a phrase I’d been known to use.
So I went into a private IRC conversation to clear up the sticking point.
And he still didn’t get it.
I took a step back and explained the immediate prerequisites, which I had thought would be obvious -
He didn’t understand my explanation of the prerequisites.
In desperation, I recursed all the way back to Bayes’s Theorem, the ultimate foundation stone of -
He didn’t know how to apply Bayes’s Theorem to update the probability that a fruit is a banana, after it is observed to be yellow. He kept mixing up p(b|y) and p(y|b).
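The distinction between the two conditional probabilities is exactly what Bayes’s Theorem mediates. As a minimal sketch of the banana update (all the numbers below are invented for illustration; they were not part of the original conversation):

```python
# Bayes' Theorem: P(banana | yellow) = P(yellow | banana) * P(banana) / P(yellow)
# All probabilities here are illustrative assumptions.

p_banana = 0.3                 # prior P(b): fraction of fruits that are bananas
p_yellow_given_banana = 0.9    # likelihood P(y|b): bananas that look yellow
p_yellow_given_other = 0.1     # P(y|~b): non-bananas that look yellow

# Total probability of observing yellow, summed over both hypotheses.
p_yellow = (p_yellow_given_banana * p_banana
            + p_yellow_given_other * (1 - p_banana))

# Posterior: probability the fruit is a banana, given that it looks yellow.
p_banana_given_yellow = p_yellow_given_banana * p_banana / p_yellow

print(round(p_banana_given_yellow, 3))  # prints 0.794
```

Note that with these numbers P(y|b) is 0.9 while P(b|y) comes out to about 0.79; the two quantities answer different questions, which is precisely the confusion at issue.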
It seems like a small thing, I know. It’s strange how small things can trigger major life-realizations. Any former TAs among my readers are probably laughing: I hadn’t realized, until then, that instructors got misleading feedback. Robin commented yesterday that the best way to aim your explanations is feedback from the intended audience, “an advantage teachers often have”. But what if self-anchoring also causes you to overestimate how much understanding appears in your feedback?
I fell prey to a double illusion of transparency. First, I assumed that my words meant what I intended them to mean—that my listeners heard my intentions as though they were transparent. Second, when someone repeated back my sentences using slightly different word orderings, I assumed that what I heard was what they had intended to say. As if all words were transparent windows into thought, in both directions.
I thought that if I said, “Hey, guess what I noticed today! Bayes’s Theorem is the secret of the universe!”, and someone else said, “Yes! Bayes’s Theorem is the secret of the universe!”, then this was what a successful teacher-student interaction looked like: knowledge conveyed and verified. I’d read Pirsig and I knew, in theory, about how students learn to repeat back what the teacher says in slightly different words. But I thought of that as a deliberate tactic to get good grades, and I wasn’t grading anyone.
This may sound odd, but until that very day, I hadn’t realized why there were such things as universities. I’d thought it was just rent-seekers who’d gotten a lock on the credentialing system. Why would you need teachers to learn? That was what books were for.
But now a great and terrible light was dawning upon me. Genuinely explaining complicated things took months or years, and an entire university infrastructure with painstakingly crafted textbooks and professional instructors. You couldn’t just tell people.
You’re laughing at me right now, academic readers; but think back and you’ll realize that academics are generally very careful not to tell the general population how difficult it is to explain things, because it would come across as condescending. Physicists can’t just say, “What we do is beyond your comprehension, foolish mortal” when Congress is considering their funding. Richard Feynman once said that if you really understand something in physics you should be able to explain it to your grandmother. I believed him. I was shocked to discover it wasn’t true.
But once I realized, it became horribly clear why no one had picked up and run with any of the wonderful ideas I’d been telling about Artificial Intelligence.
If I wanted to explain all these marvelous ideas I had, I’d have to go back, and back, and back. I’d have to start with the things I’d figured out before I was even thinking about Artificial Intelligence, the foundations without which nothing else would make sense.
Like all that stuff I’d worked out about human rationality, back at the dawn of time.
Which I’d considerably reworked after receiving my Bayesian Enlightenment. But either way, I had to start with the foundations. Nothing I said about AI was going to make sense unless I started at the beginning. My listeners would just decide that emergence was a better explanation.
And the beginning of all things in the reworked version was Bayes, to which there didn’t seem to be any decent online introduction for newbies. Most sources just stated Bayes’s Theorem and defined the terms. This, I now realized, was not going to be sufficient. The online sources I saw didn’t even say why Bayes’s Theorem was important. E. T. Jaynes seemed to get it, but Jaynes spoke only in calculus—no hope for novices there.
So I mentally consigned everything I’d written before 2003 to the trash heap—it was mostly obsolete in the wake of my Bayesian Enlightenment, anyway—and started over at what I fondly conceived to be the beginning.
(It wasn’t.)
And I would explain it so clearly that even grade school students would get it.
(They didn’t.)
I had, and have, much left to learn about explaining. But that’s how it all began.
Eliezer, the so-called “expert blind spot” is IMHO one of the most important problems in lecture-based education and even scientific communication. One of my holy grails is to make AI that helps address this, via cognitive models of both experts and novices. The AI would understand what was said and “translate” the message to each novice individually, taking advantage of their pre-existing knowledge. In some cases, this “translation” would involve lengthy tutoring with new concepts and knowledge.
Yes, the dependencies need to be built first. I like this idea of a package-management system for humans.
Physicists can’t just say, “What we do is beyond your comprehension, foolish mortal” … “if you really understand something in physics you should be able to explain it to your grandmother.” … I was shocked to discover it wasn’t true.
This seems inconsistent with the rest of your post, which argues that you can explain physics to grandmothers or Congressmen, as long as you have the opportunity to iterate back and forth to verify understanding on both sides. While the practical implications of both may be the same (“It’s not worth the time to try”), there’s a big difference between “not worth doing” and “impossible”.
Well, yes, you can explain physics to sufficiently intelligent and diligent grandmothers or Congresspersons, over the course of months using adequate textbooks, lectures, and homework exercises.
It seems, then, that the goal is to motivate (and hence emotionally reward) diligence and intelligence.
It occurs to me that explaining things to people who “don’t get it” is often actually a matter of them not wanting to get it—but being polite enough to feign interest (even to themselves) all the way through the conversation. Most likely, their “wanting to get it” is a “belief in belief”—it’s part of their identity and personal integrity that they wish to listen to evidence, but in reality their emotional brain is sending out “what is the point of paying attention to any of this gibberish?” signals, and so they let their mind drift off to other things while they “try” to follow along. They likely do not even realize they are doing this.
This is consistent with my own anecdata, which is that engaging people emotionally before you start explaining something to them, and genuinely praising them—without being condescending—each time they reach a milestone along the path towards understanding your explanation, tends to have a much higher chance of succeeding in them “grasping” the explanation and actually attempting to incorporate it into their world-view.
The problem I’m currently working on, is that when they do attempt to incorporate it, if it winds up causing cognitive dissonance with something else that’s already in their world-view, they will often become irrationally hostile to me for having “slipped in” an “enemy soldier”.
Grandmothers? Maybe. I would hazard that most congressmen have already had a few years’ practice at replacing textbooks, lectures, and homework with keggers and fratboy misbehaviour, back in their formative years.
It’s like the saying that good thinking is the ability to hold two diametrically opposite thoughts at the same time and still continue with whatever we’re doing.
When asked to explain in a few words what he had accomplished, Feynman said, “Buddy, if I could tell you in a minute what I had done, it would not be worth the Nobel Prize.” Though not an exact contradiction, this is somewhat at odds with what he said about explaining physics to your grandmother. He also said he wasn’t able to explain what he did to his father.
Feynman said, “The first principle is that you must not fool yourself, and you are the easiest person to fool.” Yet in the book Some Time with Feynman, when he talks about working on problems, he says you have to fool yourself, in a different sense: when you are attacking a formidable problem, you may doubt how you’ll be able to solve something others haven’t. So you fool yourself into believing you’re somehow special and will be able to solve it, and you keep working on it.
Usually, what is said is taken out of context, or only one side of it is taken, since most people want things to be either black or white, when most things come in varying shades of gray.
over the course of months
Doubtless. But you may be able to pare it down to something at the same time manageable in a short time and yet neither trite nor vague. I think Feynman did this in the book QED.
There are indeed cases where it does take someone who understands the subject, months of iteration to explain it. But when someone says, “I can’t explain it to you,” that can either mean:
a) the time to do so is genuinely cost-prohibitive, or the listener is very stupid,
OR
b) the would-be explainer doesn’t really understand it and has been operating in a sort of “Chinese room”, manipulating symbols without understanding their connection to everything else.
In my experience, a) is the exception, not the rule.
It’s not clear why it isn’t true as originally intended. Books should be enough for understanding anything; you’d just need good from-the-ground-up textbooks, and probably months or years to read them. Teachers are out of this loop, and from personal experience I see teacher-mediated learning as inefficient, given a motivated student and the availability of good textbooks.
Universities institutionalize the very process of learning, which helps when motivation is weak and the goal is not even on the horizon; as a result, universities turn out a larger number of trained people than would be possible by just printing good textbooks.
In practice, this isn’t true. Some people really do have trouble learning from books. Simply reading the book aloud to them is enough to overcome the block.
I don’t know where the problem originates, however. It seems strange to chalk it up to lack of motivation or stupidity, given the people I know.
In other words, books contain all of the knowledge necessary to understand anything but not everyone can pick up the understanding itself from a book. Why, I don’t know.
There’s one major difference: people can answer learner-generated questions and engage in conversation, while books cannot. Reading the book aloud to someone probably isn’t enough; reading it aloud and then having a Q&A session after (or better yet, during) can be a major improvement.
Is it sufficient to read the book aloud to them even if you don’t understand it yourself? If so why isn’t there a profession of ill-educated freelance book-readers?
Many tutors are more or less exactly that.
Really? One on one? I’ve certainly been to many ‘read-out-the-textbook’ lectures, but there’s a good point to those, which is why I went. One on one I’d feel very robbed.
What’s that?
You can ask questions from an expert on the fly.
That’s not enough to make me not hate lectures.
We must not overlook the number one reason something is difficult to explain: that what one is trying to explain is nonsense. (This is not specifically directed at anyone posting here.)
douglas: I think that counts as a subset of my b), in that if it’s nonsense, the would-be explainer doesn’t understand it.
Silas- yes, good point, but an important subset in that the person attempting to do the explaining often overlooks it. When was the last time you were having trouble explaining or understanding something and you asked, “Is this just nonsense?”
douglas: Actually, for me that happens quite a bit when on the “having trouble understanding” side, but I’m just cynical like that. For example, I interrogate people in online discussions about the difference in meaning between “Sony’s problem was setting the PS3′s price point too high” and “Sony’s problem was setting the PS3′s price too high.” (Yes, I know what a price point is, but it doesn’t seem to affect their statement.)
OTOH, when on the “having trouble explaining” side, I often do find gaps in my knowledge that force me to concede I don’t really understand the topic, in that sense, “overlooking” the possibility it’s nonsense.
Silas- I like your example of interrogation. You rabble rouser, and I say that with utmost respect and love. I’ve had to throw out a couple of deeply cherished beliefs in my time, and it can be brutal. I try to go back to the question, “What does the evidence indicate?”, and then I have to be willing to look at evidence that I had neglected because I was too fixed or biased to consider it. I must admit, when I look at the state of the world, I don’t have a hard time believing that much of what currently passes as sense is actually nonsense. Ya know?
For the record, Sony’s biggest problem is the lack of a killer app for the PS3. When Final Fantasy 13 or Metal Gear Solid 4 are finished, we might just see actual PS3 sales. (Or so I believe, extrapolating from my own behavior; I don’t buy a system unless there is a game for it that I want to play.)
I like to think I’m pretty good at explaining things; it is easier to explain when you have back-and-forth feedback than when you’re writing a textbook (because when somebody doesn’t get something, you can keep throwing words at the topic until something sticks) but sometimes all you have is one shot...
Feynman didn’t understand physics. Which isn’t particularly shameful, since no one else understands physics either.
I had a similar run-in when I tried to go through the Cantor Diagonal Argument with a bunch of gifted 13-year-olds. I thought I had them following right through to the end, but when I came to the conclusion, they cried: “But infinity is infinity!”
Not quite as concrete as Bayesian inference, but it’s still a difficult concept. Some of those students would probably never think of that lecture again, and some, after some years of ruminating and/or majoring in math, would finally understand what I was getting at. After having that run-in, I actually switched over to teaching conditional probability (in particular, the Monty Hall problem) as my “fun” math lecture.
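The Monty Hall problem does make a nice classroom demonstration, partly because a skeptical student can settle it empirically. A minimal simulation (the trial count and the use of three doors are the standard setup; the specific code is just one way to sketch it):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials))
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(stay_wins / trials, switch_wins / trials)  # roughly 1/3 vs 2/3
```

The simulated win rates come out near 1/3 for staying and 2/3 for switching, which is often more persuasive to a doubting student than the conditional-probability derivation alone.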
But you may be able to pare it down to something at the same time manageable in a short time and yet neither trite nor vague. I think Feynman did this in the book QED.
QED was one of my favorite books when I was nine years old. I was shocked when I grew up and read The Feynman Lectures and realized that QED hadn’t taught me a single bit of physics.
How do you know QED didn’t teach you a single bit of physics?
If you assimilated the corresponding bits of the Feynman lectures (or any other physics you encountered along the way) at all more easily for having read QED at age 9, then it did teach you some physics, albeit in a sense hard to quantify.
If reading its hand-waving stuff about light taking all possible paths at once increased the probability you’d have assigned to (say) something like the Aharonov-Bohm effect if anyone had thought to ask you how likely you thought it, then it did teach you some physics, even in the “technical” sense. (Whether more or less than one bit depends on how much that probability increased.)
If having notions like path integrals, phase, and stationary action waved at you unintimidatingly didn’t push your thinking about physics in the direction of clearer understanding, then it seems that you were either (1) already implausibly acquainted with them for even an extraordinarily bright 9-year-old, or (2) implausibly impervious to such things for someone capable of reading and enjoying QED. Of course, something could be implausible to me but still true.
I see QED as a bit like stating the axioms of a mathematical theory. You can, in principle, derive the whole theory from the axioms, but in practice it takes generations of ingenuity to come up with the tools to do that. We take courses in mathematics not just to learn the axioms, but also, and primarily, to learn the vast library of tricks that let us do something useful with the axioms.
Similarly, I remember my first or second physics course, either mechanics or electromagnetism. The inside of the cover had, as I recall, all the “axioms”, the fundamental laws from which everything could be derived. Those fit inside the cover. But, just as in a mathematical subject, the main body of the subject was the library of tricks that let us actually make specific predictions from those fundamental laws.
Feynman, as I recall, was very up front in QED about what it did and did not contain. He was explicit about it not including the tricks that we would need to learn to apply the fundamental principles to real predictions about real situations.
However, I would not really call the book “vague” or even “hand-waving”, any more than I would call the inside cover of my physics textbook “hand-waving” or even “not physics”. It was seriously lacking, yes, admittedly so. But not at all in the way that, say, quantum mechanics popularizations typically are. Popularizations include neither the axioms (fundamental laws) of the theory, nor the tricks, but instead are filled with metaphor and impressionistic talk and not a small amount of pop philosophy. Not the same thing at all as QED (I mean QED the book, not the subject of quantum electrodynamics).
You know, I’m starting to suspect you were right the first time.
As much as I am enjoying the sequences, I can tell you right now that they are not written in a way that makes them very accessible to the layperson. The Simple Truth is pretty solid, but the rest of the foundations still haven’t come into focus for me. I can see the buildings and rooftops, but I still don’t feel like I have the foundations.
I am picking up more than enough knowledge and understanding to start filling the gaps myself but I still stop and wonder how tab A fits into slot B. Eventually it clicks and when it does I know how I could have said it to my past self in only a few sentences. But this is an unfair comparison for me to make against you.
The only reason I bring this up is because you seem very interested in explaining your ideas. You are doing great but I don’t think you are quite where you want to be yet. I have no problem with the quality of your work but, if you are anything like me, you would have a problem with it if you were given the chance to see it through my eyes. I suspect you would be surprised and say, “Oh, wow, that isn’t what I intended at all.”
This is why I have been adding my thoughts on the older articles as I read them. I am hoping that somehow my first exposure can be useful feedback for you. (Well, that and talking about it helps me remember things more accurately.)
In any case, thank you very much for the hard work. I am at 140 of 584 on the list and am looking forward to the rest.
--Heuer, Psychology of Intelligence Analysis, chapter 12 (very good book; recommended)
Curiously enough, there is a recording of an interview with him where he argues almost exactly the opposite, namely that he can’t explain something in sufficient detail to laypeople because of the long inferential distance.