Some time ago, I had a simple insight that seems crucial and really important, and has been on my mind a lot. Yet at the same time, I’m unable to really share it, because on the surface it seems so obvious as to not be worth stating, and very few people would probably get much out of me just stating it. I presume that this is an instance of the Burrito Phenomenon:
While working on an article for the Monad.Reader, I’ve had the opportunity to think about how people learn and gain intuition for abstraction, and the implications for pedagogy. The heart of the matter is that people begin with the concrete, and move to the abstract. Humans are very good at pattern recognition, so this is a natural progression. By examining concrete objects in detail, one begins to notice similarities and patterns, until one comes to understand on a more abstract, intuitive level. This is why it’s such good pedagogical practice to demonstrate examples of concepts you are trying to teach. It’s particularly important to note that this process doesn’t change even when one is presented with the abstraction up front! For example, when presented with a mathematical definition for the first time, most people (me included) don’t “get it” immediately: it is only after examining some specific instances of the definition, and working through the implications of the definition in detail, that one begins to appreciate the definition and gain an understanding of what it “really says.”
Unfortunately, there is a whole cottage industry of monad tutorials that get this wrong. To see what I mean, imagine the following scenario: Joe Haskeller is trying to learn about monads. After struggling to understand them for a week, looking at examples, writing code, reading things other people have written, he finally has an “aha!” moment: everything is suddenly clear, and Joe Understands Monads! What has really happened, of course, is that Joe’s brain has fit all the details together into a higher-level abstraction, a metaphor which Joe can use to get an intuitive grasp of monads; let us suppose that Joe’s metaphor is that Monads are Like Burritos. Here is where Joe badly misinterprets his own thought process: “Of course!” Joe thinks. “It’s all so simple now. The key to understanding monads is that they are Like Burritos. If only I had thought of this before!” The problem, of course, is that if Joe HAD thought of this before, it wouldn’t have helped: the week of struggling through details was a necessary and integral part of forming Joe’s Burrito intuition, not a sad consequence of his failure to hit upon the idea sooner.
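(To make the quoted point concrete with a sketch of my own, not something from the original article: the "week of struggling through details" might look like working out one specific instance by hand, say Maybe, and only afterwards noticing that the stop-on-failure plumbing one keeps writing is exactly what the abstract interface factors out. The names below are purely illustrative.)

```haskell
-- A concrete instance a learner might work through before the abstraction clicks:
-- chaining computations that can fail, using Maybe.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- First, written with explicit case analysis, threading the failure by hand...
avgOfQuotients :: Int -> Int -> Int -> Maybe Int
avgOfQuotients x y z =
  case safeDiv x y of
    Nothing -> Nothing
    Just a  -> case safeDiv x z of
      Nothing -> Nothing
      Just b  -> Just ((a + b) `div` 2)

-- ...and then with the Monad interface, where the same "stop on Nothing"
-- pattern has been factored out into (>>=) / do-notation.
avgOfQuotients' :: Int -> Int -> Int -> Maybe Int
avgOfQuotients' x y z = do
  a <- safeDiv x y
  b <- safeDiv x z
  return ((a + b) `div` 2)
```

Here avgOfQuotients' 10 2 5 evaluates to Just 3, and any zero divisor gives Nothing; the point is not the example itself but that only after writing a few such instances does the general pattern feel obvious.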
I’m curious: do others commonly get this feeling of having finally internalized something really crucial, which you at the same time know you can’t communicate without spending so much time as to make it not worth the effort? I seem to get one such feeling maybe once or twice a year.
To clarify, I don’t mean simply the feeling of having an intuition which you can’t explain because of overwhelming inferential distance. That happens all the time. I mean the feeling of something clicking, and then occupying your thoughts a large part of the time, which you can’t explain because you can’t state it without it seeming entirely obvious.
(And for those curious—what clicked for me this time around was basically the point Eliezer was making in No Universally Compelling Arguments and Created Already in Motion, but as applied to humans, not hypothetical AIs. In other words, if a person’s brain is not evaluating beliefs on the basis of their truth-value, then it doesn’t matter how good or right or reasonable your argument is—or for that matter, any piece of information that they might receive. And brains can never evaluate a claim on the basis of the claim’s truth value, for a claim’s truth value is not a simple attribute that could just be extracted directly. This doesn’t just mean that people might (consciously or subconsciously) engage in motivated cognition—that, I already knew. It also means that we ourselves can never know for certain whether hearing the argument that should convince us if we were perfect reasoners will in fact convince us, or whether we’ll just dismiss it as flawed for basically no good reason.)
Yes, I think I know what you mean. I hit that roadblock just about every time I try to explain math concepts to my little brother. It’s not so much that he doesn’t have enough background knowledge to get what I’m saying, as that I already have a very specific understanding of math built up in my head in which half of algebra is too self-evident to break down any further.