Understanding your understanding
Related to: Truly Part of You, A Technical Explanation of Technical Explanation
Partly because of LessWrong discussions about what really counts as understanding (some typical examples), I came up with a scheme to classify different levels of understanding so that posters can be more precise about what they mean when they claim to understand—or fail to understand—a particular phenomenon or domain.
Each level comes with a description so you can tell whether you meet it, and notes what to watch out for when you're at or close to that level. I have taken the liberty of naming them after the LW articles that describe what such a level is like.
Level 0: The “Guessing the Teacher’s Password” Stage
Summary: You have no understanding, because you don’t see how any outcome is more or less likely than any other.
Description: This level is only included for comparison—to show something that is not understanding. At this point, you have, at best, labels that other people use when describing the phenomenon. Maybe you can even generate the appearance of understanding on the topic. However, you actually have a maximum entropy probability distribution. In other words, nothing would surprise you, no event is more or less likely to happen, and everything is consistent with what you “know” about it. No rationalist should count this as an understanding, though it may involve knowledge of the labels that a domain uses.
Things to watch out for: Scientific-sounding terms in your vocabulary that don’t correspond to an actual predictive model; your inability to say what you expect to see, and what you would be surprised by.
Level 1: The “Shut up and Calculate” Stage
Summary: You can successfully predict the phenomenon, but see it as an independent, compartmentalized domain.
Description: This is where you can predict the phenomenon, using a generative model that tells you what to expect. You are capable of being surprised, as certain observations are assigned low probability. The model may even be tremendously complicated, but it works.
Though low on the hierarchy, it’s actually a big accomplishment in itself. However, when you are at this stage, you see its dynamics as being unrelated to anything else, belonging to its own domain, following its own rules. While it might have parallels to things you do understand, you see no reason why the parallel must hold, and therefore can’t reason about how extensive that relationship is.
Things to watch out for: Going from “It just works, I don’t know what it means” to “it doesn’t mean anything!” Also, becoming proud of your ignorance of its relationship to the rest of the world.
Level 2: The “Entangled Truths” Stage. (Alternate name: “Universal Fire”.)
Summary: Your accurate model in this domain has deep connections to the rest of your models (whether inferential or causal); inferences can flow between the two.
Description: At this stage, your model of the phenomenon is also deeply connected to your model of everything else. Instead of the phenomenon being something with its own set of rules, you see how its dynamics interface with the dynamics of everything else in your understanding. You can derive parameters in this domain from your knowledge in another domain; you can explain how they are related.
Note the regress here: you meet this stage when your model for the new phenomenon connects to your model for “everything else”. So what about the first “everything else” you understood (which could be called your “primitively understood” part of reality)? This would be the instinctive model of the world that you are born with: the “folk physics”, “folk psychology”, etc. Its existence is revealed in such experiments as when babies are confused by rolling balls that suddenly violate the laws of physics.
This “Level 2” understanding therefore ultimately connects everything back to your direct, raw experiences (“qualia”) of the world, but, importantly, is not subordinate to them – optical illusions shouldn’t override the stronger evidence that proves to you it’s an illusion.
Things to watch out for: Assuming that similar behavior in different domains (“surface analogies”) is enough to explain their relationship. Also, using one intersection between multiple domains as a reason to immediately collapse them together.
Level 3: The “Truly Part of You” Stage
Summary: Your models are such that you would re-discover them, for the right reasons, even if they were deleted from your memory.
Description: At this stage, not only do you have good, well-connected models of reality, but they are so well-grounded that they “regenerate” when “damaged”. That is, you weren’t merely fed these wonderful models outright by some other Really Smart Being (though initially you might have been), but rather, you also consistently use a reliable method for gaining knowledge, and this method would eventually stumble upon the same model you have now, no matter how much knowledge is stripped away from it.
This capability arises because your high understanding makes much of your knowledge redundant: knowing something in one domain has implications in quite distant domains, leading you to recognize what was lost – and your reliable methods of inference tell you what, if anything, you need to do to recover it.
This stage should be the goal of all rationalists.
Things to watch out for: Hindsight bias: you may think you would have made the same inferences at a previous epistemic state, but that might just be due to already knowing the answers. Also, if you’re really at this stage, you should have what amounts to a “fountain of knowledge” – are you learning all you can from it?
In conclusion: In trying to enhance your own, or someone else’s, understanding of a topic, I recommend identifying which level you both are at to see if you have something to learn from each other, or are simply using different standards.
Excellent post, I like the breakdown. So let’s take a simple example for clarification:
“Why do race cars sound high-pitched when they are coming toward you and low pitched when they’re heading away?”
Level 0: “That’s because of the Doppler effect.”
Level 1: “The frequency is f = (v + vr)/(v + vs) * f0. Make sure you get the signs right.”
Level 2: “Sound waves have a wavelength, but the wavelength is shortened when the car is coming toward you, because the car ‘catches up’ with its own waves, and lengthened when it’s heading away. Since the waves travel at a constant speed in the medium, a short wavelength implies a high frequency, and frequency is what we hear as pitch. Yes, I know the equation. The phenomenon also happens for light, although the equation is different.”
Level 3: “No, I don’t remember the equation. Here, gimme a minute and I’ll derive it.”
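The Level 1 equation from the example above is easy to check numerically. A minimal sketch (the function name, the 30 m/s closing speed, and the 343 m/s speed of sound are my own illustrative choices; the sign convention assumed here takes vs as positive when the source recedes and vr as positive when the receiver approaches):

```python
def doppler(f0, v_r=0.0, v_s=0.0, v=343.0):
    """Observed frequency f = (v + v_r) / (v + v_s) * f0.

    Assumed sign convention: v_r > 0 when the receiver moves toward
    the source, v_s > 0 when the source moves away from the receiver.
    v is the speed of sound in the medium (m/s).
    """
    return (v + v_r) / (v + v_s) * f0

# A 440 Hz source closing on a stationary observer at 30 m/s sounds
# higher-pitched; the same source heading away sounds lower-pitched.
approaching = doppler(440.0, v_s=-30.0)
receding = doppler(440.0, v_s=+30.0)
```

With these numbers, the approaching source comes out around 482 Hz and the receding one around 405 Hz, which matches the qualitative claim in the original question.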
Well done. Coincidentally, I think my first frustration regarding (different meanings of) understanding was about that exact issue, back in high school. When my friends and I had learned about the Doppler effect, we were all at Level 1, maybe with minor inroads into Level 2. The problem? I classified that as “not understanding”, while my friends called it “complete understanding”. So we had conversations like this:
me: Okay, I don’t really get this Doppler effect. I mean, why should coming closer make the pitch higher?
them: Because the equation says...
me: Well, I know the equation, but I don’t get it at the gut level. What is it about moving closer that shortens the waves like that?
them: Because it like, compresses the waves.
me: But why should the sound wave act like some stiff rod connecting me to the car? Why do you get to apply that kind of reasoning to sound waves?
them: Because that’s what the equations say!
me: Argh!
By the way, another important thing about Level 3 is that you could change the question to ”… low-pitched when they are coming toward you …” and vice versa, and the Level 3 rationalist would still derive the same result, except also adding, “Wait, I’m confused—are you sure you’re reporting that right?”
It seems to me that this is usually the answer to questions about quantum mechanics. Does that mean most people (including physicists) understand it at Level 1?
I had a friend express frustration with me once, because we would have conversations about some subjects she felt she understood, in which I would say things like “I don’t understand X”, and then proceed to demonstrate what she felt was a better understanding than she had. I think it felt to her like I was “sandbagging” when claiming I didn’t understand things, whereas I was merely expressing that I was somewhere between levels 1 and 2 and was unsatisfied with it.
Do you remember what insight helped you to overcome these questions? My experience with Level 1 → Level 2 transitions is that I somehow mysteriously got used to the phenomenon, without knowing exactly how that happened. Also, I am not sure how I could explain the Doppler effect to somebody at Level 1, or answer questions such as the ones above. It seems that explanations reliably work up to Level 1 only.
I suspect the rationalist would already be confused at Level 1, if he got the signs right.
I don’t remember when I finally got a “Level 2” answer, but to move someone else in that direction, I would explain it this way: First, make sure they understand what’s actually happening in a compression wave in air. Help visualize it with a slinky if necessary. Then say,
“The sensation of sound comes from when your ears recognize a quick sequence of compressed-air, less-compressed-air, compressed-air, less-compressed-air, etc. And the rate at which this sequence cycles determines the pitch you hear, with quicker cycling meaning a higher pitch.
“If the source of the sound isn’t just standing still, but moving toward you, then each compression it makes of the air happens at a point where it is closer to you than it was before, so that bit of compressed air hits you sooner than otherwise. So as the compressed/less-compressed groups reach you, they cycle through faster, which you experience as a higher pitch, for the same reason you’d experience anything as a higher pitch.”
(Someone let me know if I’m seriously off; it’s been a while.)
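That wavefront-timing explanation can itself be turned into a tiny simulation, which is roughly the Level 2 → Level 3 move: derive the pitch shift from pulse arrival times rather than from the memorized equation. A sketch, with made-up numbers (343 m/s speed of sound, a 440 Hz source approaching at 30 m/s from 1 km away):

```python
# A source moves toward a stationary observer at the origin, emitting one
# "compression" pulse every T seconds. Each pulse arrives at emission time
# plus (distance at emission) / (speed of sound); no Doppler formula needed.
c = 343.0         # speed of sound in air, m/s (approximate)
v_source = 30.0   # source speed toward the observer, m/s
f0 = 440.0        # emitted frequency, Hz
T = 1.0 / f0      # emission period, s
x0 = 1000.0       # initial source distance, m

arrivals = []
for n in range(100):
    t_emit = n * T
    x = x0 - v_source * t_emit       # source position when pulse n is emitted
    arrivals.append(t_emit + x / c)  # arrival time of pulse n at the observer

# Each pulse is emitted from slightly closer in, so arrivals bunch up:
observed_period = arrivals[1] - arrivals[0]
observed_freq = 1.0 / observed_period
```

The simulated `observed_freq` comes out identical to what the standard equation predicts for an approaching source, which is a nice sanity check that the “each compression starts from closer in” story really is the whole mechanism.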
My first instinct would have been something like that, but on second thought, I’d start with an example of a boat moving in water and the waves it makes, maybe drawing a picture and asking them to visualize it. This is admittedly very crude and inaccurate, but gives a good overview of the phenomenon. After that I’d elaborate on the differences of surface waves vs. pressure waves, wavelength & frequency and anatomy of hearing etc.
Generally speaking (and not directed against anything anyone has said): give the explainee an intuitive framework to hang details on, don’t pour a litany of seemingly unconnected facts. Just make sure he doesn’t confuse the crude framework for the actual phenomenon.
(...And more generally: of course, the best would be to explain in a mode that is natural for the individual… for me (and, I assume, quite a few others) it’s visuality & real-world analogies.)
(And hello, everyone. First post.)
Welcome to Less Wrong! Feel free to introduce yourself on that thread. Here’s the rest of RobinZ’s newcomer welcome package.
And thanks for the reply to my article and comment. I hope to have an article up soon about how to explain, which will expand on the ideas here (this thread and the article).
Some thoughts regarding the difference between level 2 and 3:
Seems like a level 3 understanding necessitates an insight-producing ability (i.e. ability to improve existing models) -- otherwise your models wouldn’t regenerate if destroyed. The question is why your insights with a level 2 understanding aren’t evidence of a level 3 understanding. Or whether it’s even possible to have insights with a level 2 understanding.
If we’re able to regenerate a model, we obviously have model-making abilities. But isn’t the same happening when you draw connections between your models? The moment you realize two or more models are connected, you’ve added to your model of reality. Neither model predicted their relationship with the other, your insight connected the two, improving your older model.
How’s level 3 different from level 2?
In short, if you are simply informed about the connections between the fields, you are at level 2, but if you could discover the links yourself with no hints, you are at level 3. For example, if you know how the parameter “speed of light”, c, has implications for both general relativity and quantum phenomena, you have a level 2 understanding (to the extent that these fields are involved), but if you couldn’t discover the need for a “speed of light” parameter, how to find it, and how it affects the disparate fields, you haven’t reached level 3.
Actually, this is a good example of why I don’t think this is really a linear hierarchy.
My understanding is like your description in level 2, but I don’t know the equation. I could probably derive it if I had enough time, knew the speed of sound in air, had a way to check that I could properly relate wavelength to frequency, and I could use a computer to check my results by programming it to simulate the sound of a constant-pitch siren as an ambulance passes by.
Is that level 2, or level 3? 2.8? I find the whole idea of levels rather confusing here.
This would suggest a typology of understanding: type N* understanding knows the passwords, type I understanding knows the equations (edit: or can otherwise find the answer—thanks, SilasBarta), type II understanding knows the connections to other fields of knowledge, and type III understanding generates type I and II knowledge within the domain and connections to other fields of knowledge (type II knowledge in other domains).
* Zero in roman numerals. Which is appropriate, of course, because the passwords are just names and can be wrong in a number of ways.
Very good, but I just want to emphasize that Level 1, as I’ve defined it, doesn’t necessarily involve equations; it just means that you get the right answers somehow, as long as it’s not cheating. (So I think the “calculate” part in its name is a bit misleading and I should probably pick a different one.)
To put it another way: for purposes of determining whether you have type I understanding, ad hoc is okay, but post hoc is not.
I see—yes, that’s a good point. I’ll edit in a note.
(By the way, I switched from Arabic to Roman numerals to distinguish the typology from the hierarchy—it’s level 1 and type I because the two are related, not perfectly identical.)
Oops, I had thought to correct my first reference (which should have been “Level 1”), but only corrected it halfway! Fixed now.
The original draft of this made sure to note that a) the levels aren’t really discrete, in that you can be, e.g., partway/halfway/mostly toward completing Level 2; and that b) it’s conceivably possible to complete them out of order, but that should be extremely unusual. I decided that explaining all of that would be a distracting “caveat overload”.
Also, I don’t think your situation counts as bypassing levels. Level 1, by design, doesn’t say that you must have the standard equation in your model, or that you could provide it right this second. It just requires that you have a model that works. So if you know enough to generate a predictive, successful model, then you’re at least at Level 1, even if you need some time to flesh out the specific predictions.
So maybe there should be level 2a: conceptual understanding, and level 2b: quantitative + conceptual understanding.
The fact that nobody quite falls perfectly into a discrete level doesn’t mean it isn’t a useful heuristic though. Even in your case you could say that you’re “on your way” to level 2.
Right—but that’s what I think is wrong with the definition of levels 2 and 3. Since I could get to the equation, if I had to, shouldn’t that also be a valid description of level 3? Requiring me to know the equation in order to be level 2, yet not requiring it at level 3, kind of makes the point that this is not really a linear progression.
[Edit to add: I don’t necessarily mean that understanding itself isn’t linear, just that this particular set of definitions does not seem to be.]
So what do you call it when you know the level-2 explanation but don’t know the equation and can’t derive it?
Deriving the frequency equation. I should do that some time.
I’d call it “time to dust off the math books”. Incidentally, I’ve got to do just that.
There could be an interesting parallel here with “levels of misunderstanding”. Perhaps it could help explain how difficult it is for a homeopath or creationist to change their mind once they’ve hit the “entangled truths” or “truly part of you” stages.
That’s probably closer to the truth than one might think. Once a belief system moves beyond rote memorization of its basic principles and becomes associated with other domains, non-rational beliefs can get very heavily embedded with outside belief networks. The feedback loop that can be created by having just a few anecdotal connections to an already established system would be severe.
The key factor is that, for people who are not strict rationalists already, the “correlation=causation” attitude is quite strong, so any neuronal links I make from new information to outside branches of knowledge can freely flow right back the way they came. Where the rationalist would have to find additional evidence to ingrain a belief, the fundamentalist is free to draw from his outside branches of knowledge to find reverse reinforcement to support the belief he’s trying to learn.
Of course, we all do this to a certain extent, bootstrapping our new, tenuous beliefs by looking for associations we can make to older, more familiar territory. But fundamentalists can get through the neuronal rut-treading faster than rationalists, allowing a belief system to become ingrained that much faster.
Also, part of rationalists’ training involves maintaining belief system elasticity, so we are ready to shift our conceptions as new information comes along. Fundamentalists, on the other hand, strive in exactly the reverse direction, wanting each neuronal pathway to be as unchanging as possible. There are two main reasons I can think of that this would be important: One is that God’s morality is eternal and unchanging, so the further we take our thought patterns out of that messy doubting game, the closer we come to “perfection”. The other is that certain ideas, like adultery or homosexuality, are expressly forbidden not just to do, but even to think about. What’s a person to do? Well, once you hit the Stage 3 described above, your neural pathways will just naturally flow in the prescribed direction, avoiding extraneous pitfalls that you’ve edited out.
I remember reading something about this stage with professional chess players a long time ago—a chess master simulates fewer possible moves in their head than a player with only moderate experience, because past a certain stage, their brain pathways have seen enough games that the obviously “bad” moves simply drop out of their neural net.
Charlie Parker echoed a similar thing about jazz:
“You’ve got to learn your instrument. Then, you practice, practice, practice. And then, when you finally get up there on the bandstand, forget all that and just wail.”
Unfortunately, the same neural embedding that makes great chess players and musicians possible, also makes cults and other forms of indoctrination possible.
Good point. That’s why I argued here against thinking about things for too long. It’s even more important the less rational you are. Before you know it, you are past the point that any evidence can convince you that your opinion is wrong.
I think we could designate that as, say, Level (0+2i).
Hell’s bells, that’s a good idea! Let’s classify every belief as a complex number (magnitude 1) with a real and imaginary part!
Astrology: (0 + j1) - {imaginary but vaguely intuitive}
Aliens have visited earth: (2^-1/2 + j2^-1/2) - {intuitively possible, imaginary but with finite real component}
Michelson’s prediction of aether wind effect: (-1 + j0) - {simply, honourably wrong}
Elan vital: (0 - j1) - {”not even wrong”}
Do you work in some kind of engineering field or something where people regularly write i as “j” and coefficients to the right? Just curious.
Yeah, I’m an electrical engineer; “i” is our symbol for current, so we use j instead. As for writing it to the right or left, it’s a matter of taste as far as I know. I like it to the left because you’re immediately clued in that it’s an imaginary quantity.
I don’t see the point of restriction to magnitude one. And if you do want that, it’s much easier to just specify the phase angle.
True, but it obscures the imaginary vs. real distinction.
Also, this is a joke. I think.
I demand my jokes to be totally rigorous!
Yeah, it’s a joke, but it could also be a cute (and hence possibly mnemonic) classification scheme.
Fair enough. :) I do find when I hear a science related joke that I take about a minute to determine whether it’s “correct,” then laugh.
Best one I’ve ever heard (only works if you’ve taken a complex algebra course):
All jokes about quantum mechanics are automatically unfunny.
Actually, there are Poles in Western Europe, but they’re removable. ;)
(potentially offensive) So you can mathematically prove that Hitler destabilized Europe?
*searches Internet for removable poles*
Ha!
Edit: By the way—and I fully grant this may be obvious—“removable poles” is not a very good search term.
A plane is flying from Warsaw to Paris. The pilot announces that they are passing over Rotterdam, and that the world’s largest container ship is visible out of the windows on the right side. Shortly afterward, the plane goes into a tailspin and crashes.
Later analysis reveals that the crash occurred because all the Poles had moved into the right half-plane.
LOL.
The parent should be at 0, not −1. It’s perfectly okay to express something like “LOL” once in a while.
Of course it is—if you’re willing to take the karma hit for a comment that adds nothing to the conversation.
But it does add to the conversation, in the same way as karma does. It provides the author of the comment valuable feedback about how their comment was perceived. Yes, karma has a similar function, but we react more to written comments than abstract numbers.
The point was, there shouldn’t be a karma hit for “adding nothing to the conversation”. It should be okay to simply express a reaction without taking a karma hit. The score for “adding nothing” is 0; a negative score indicates that the comment subtracted something from the conversation. To downvote a comment is to actively discourage such a comment from being posted. I don’t think such comments should be actively discouraged.
Dilution of good content is subtraction, if not as bad as the addition of bad content. I really do have no desire to see a bare “LOL”, and will continue to vote accordingly.
There is no significant dilution occurring here. If we were flooded by “LOL” comments, or a particular user posted them with inappropriate frequency, that would be a different situation.
You are being too harsh. For my part, I have no desire to see this kind of non-niceness on here, just because we’re interested in high-quality content. It subtracts a lot more from the experience here than an occasional “LOL”.
Downvoting everything above this comment in this thread as a matter of principle.
Joking LOL.
I think this hierarchy can be derived from the way that I’ve developed for thinking about this problem—considering the person’s beliefs as a “memeplex” (“memotype”?). Replacing a few memes within a creationist’s head—even if the new memes are better—can significantly increase the net cognitive dissonance going on within their own skull and prompt them to reject facts as something that must have been tampered with, or otherwise somehow invalid, protecting their more self-consistent, incorrect model.
Once the memeplex reaches a stable local minimum region in its dissonance landscape (analogous to a fitness landscape), true information can seem worse. A well-integrated memeplex would be “truly part of you”.
EDIT: I realize this analogy is at risk of noticing “surface analogies” between genetics and memetics, which I’ve just been warned against in the article. I don’t think this is the case, but I’ll leave the caveat that my understanding of this idea may be as low as level 2.
This should be easy to test: where can we find a research article that makes (and tests) a quantitative prediction based on rigorous memetics ? ;)
(I plead guilty to using the analogy myself.)
Just remembered something which either is a generally useful test for Level 3, or deserves a distinct level of its own: having the ability to guide someone through constructing the knowledge in the first place.
I’ve just spent the afternoon tutoring a friend of mine’s kid, eighth-grader age, who’s math-averse. This is a particularly good test, I find, of how well you understand a given bit of material. (And sadly, it seems that his teacher has no particular knack for explaining.)
To someone who already groks math a bit, something like factoring a sum of integers raised to some power is “intuitively obvious”. There is no need to spell out the component intuitions of algebra, so you can just say “find the common factor, divide each term in the original sum by that, and you’re done”.
When you find yourself explaining that to a math-challenged kid, you realize that “find the common factor” isn’t an obvious, one-step operation; you have to slow way down and break it up: there are usually many different common factors, but when the teacher says “the”, what they mean is the greatest one, for instance, and finding a common factor might involve decomposing what is (in exercise form) initially expressed in a more compact form.
When you can’t rely on the kid’s mathematical intuition, you essentially have to give them an explicit algorithm for factoring expressions, much as if you were programming a computer to do it. (This is one of the reasons why I’m probably going to go ahead with my “programming as a useful rationalist skill” post at some point: the gist of it is “teaching a computer what you know about something is an excellent test of whether that knowledge is truly a part of you”.)
It’s worse than programming, since that kid also has negative feelings about math, plus some unnecessary baggage he got from his teacher that makes it more of a muddle than it should have been in the first place, all of which I must first clear out before we can build something correct.
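The “explicit algorithm for factoring, as if programming a computer” point above can be made literal. A minimal sketch of just the “find the greatest common factor and pull it out” step for integer terms (the function name and examples are mine, for illustration):

```python
from math import gcd
from functools import reduce

def factor_out_gcf(terms):
    """Spell out 'find the common factor' as an explicit algorithm:
    pull the greatest common divisor out of a sum of nonzero integer
    terms, returning (factor, remaining_terms)."""
    g = reduce(gcd, (abs(t) for t in terms))  # GCD over all terms
    return g, [t // g for t in terms]         # divide each term through

# e.g. 12 + 18 + 30 = 6 * (2 + 3 + 5)
factor, rest = factor_out_gcf([12, 18, 30])
```

Writing it this way makes visible exactly the sub-steps (“take absolute values”, “reduce pairwise GCDs”, “divide every term”) that a fluent adult performs without noticing, which is precisely what gets lost when tutoring someone without the underlying intuition.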
Well put. I think that is a good test for Level 3, since it shows how well you can do when deprived of an arbitrary “tool”.
I’ve been on both ends of the situation you’ve described: in teaching, I’ve had to break down procedures into ever smaller substeps and tell students what to do if e.g. they don’t have their multiplication table or are dividing something that exceeds its bounds (I tutor 4th graders).
Conversely, when being taught, I’ve been in situations where the instructor incorrectly assumes certain common knowledge and then can’t fill the gap, because they have forgotten what it’s like to be without it (not all have this failing, of course). It leaves me suspecting they never seriously thought about its grounding before.
Hm; offhand, I would think level 2 should be split up. There’s the level where you can see the analogies to other areas, but can’t formalize them—you can use them to reason analogically, but can’t be quite sure that what you’re doing makes sense. Then there’s the level where you actually, well, understand the connections to other areas. Does this distinction still make sense outside of mathematics?
Well, like I clarified to pjeby, the levels aren’t intended to be discrete: you can be partway toward completion of one. But what you’ve described still fits right in with Level 1: until you know why the analogy holds, and therefore can correctly predict whether any given analogical inference will hold, you’re still at Level 1.
That’s not to say that apparent analogies can’t be usefully suggestive, it’s just that they’re not a higher level of understanding.
Where in this system would you place a thorough and accurate, but superficial, model of the phenomenon? If I’ve made a lot of observations, collected a lot of data, and fit very good curves to it, I can do a pretty good job of predicting what’s going to happen—probably better than you in a lot of cases, if you’re constrained by a model that reflects a true understanding of what’s going on inside.
If we’re trying to predict where a baseball will land, I’m going to do better with my practiced curve-fitting than you are with your deep understanding of physics.
Or for a more interesting example, someone with nothing but pop-psychology notions of how the brain works, but lots of experience working with people, might do a far better job than me at modeling what another person will do, no matter how much neuroscience I study.
...to answer myself, I guess this could be seen as a variation on stage 1: you have a formula that works really well, but you can’t explain why it works. It’s just that you’ve created the formula yourself by fitting it to data, rather than being handed it by someone else.
[Edit: changed “non-generative” to “superficial”]
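As a toy version of the baseball example (the observations are invented, and the fit uses no physics at all): observe the ball’s height at three times, fit a quadratic, and extrapolate where it lands.

```python
import numpy as np

# Hypothetical observed (time, height) data -- invented for illustration;
# they happen to be consistent with h(t) = 20t - 5t^2.
times = np.array([0.5, 1.0, 1.5])
heights = np.array([8.75, 15.0, 18.75])

# "Practiced curve-fitting": fit a quadratic, no mechanics involved.
a, b, c = np.polyfit(times, heights, deg=2)

# Predict the landing time as the later root of a*t^2 + b*t + c = 0.
landing = max(np.roots([a, b, c]))
print(round(float(landing), 2))  # 4.0
```

The black box predicts the landing perfectly without ever mentioning gravity.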
Yes, those are all examples of stage 1 -- where you have some system that gives answers, and works, even though you can’t say why. They are extensions of the “primitively understood” part of reality that I mention in Level 2 (but which counts as Level 1).
I don’t know why you say they’re not “generative”—when you make a prediction with the black box inside you, you’ve generated a prediction.
However, as I mentioned in another comment, there can be partial progress on the levels. For example, if your experience with people allows you to make predictions in very different areas of human behavior, in such a way that the predictions relate to each other and have implications for each other, that would be progress into Level 2. (Though this would still be a shallow connection to your other models because it only connects to phenomena involving human behavior.)
Can we erase the relevant portion of the OPs memory, and see if he can re-derive these classifications?
What about fuzzy analogical understanding? This is understanding some process or event only with reference to an inexact metaphor: people who think of electricity as a fluid flowing through conductors, think of anatomy as if the body were a society and each part an industry or job, think of animal behavior in terms of the equivalent or similar behavior in humans, etc. This is extremely common in my experience.
I suppose it belongs somewhere between levels 1 and 2.
I’m putting in a kind word for “guessing the teacher’s password”. Sometimes it’s a useful preliminary to getting better understanding. In my case, especially if the “teacher” is stuff rather than a person, blundering around for a while gives me enough raw material to develop conscious theories.
Yes, actually the Level 0 of random guessing is not so bad. It implies that you are not systematically wrong.
In the language of compression, a Level 0 understanding would correspond to coding with a naive, uniform model: no compression at all. Not great, but at least you’re not inflating the file.
Good point. As Eliezer Yudkowsky often notes, it’s possible to do a lot worse than maximum entropy guessing. So perhaps the negative levels are when your “understanding” is so bad, you would improve by random guessing. In practice, though, even this kind of performance has some good non-randomness to it. E.g., even when you guess while at a negative level, you don’t apply randomization at the level of letters, leading to guesses like “ghftklw” for an explanation of light.
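To make the compression analogy concrete (the probabilities below are invented): under an arithmetic coder, a symbol your model assigns probability q costs −log2(q) bits. A uniform model over two symbols codes the data at exactly 1 bit per symbol, never inflating it; a confidently backwards model does worse than that.

```python
from math import log2

def expected_bits(true_dist, model_dist):
    """Expected code length per symbol (cross-entropy) when the data
    follow true_dist but we code with model_dist."""
    return sum(-p * log2(q) for p, q in zip(true_dist, model_dist))

true_dist = [0.9, 0.1]   # the data are actually quite predictable
uniform   = [0.5, 0.5]   # Level 0: maximum-entropy guessing
backwards = [0.1, 0.9]   # a "negative level": confidently wrong

print(expected_bits(true_dist, uniform))    # exactly 1.0 bit/symbol
print(expected_bits(true_dist, backwards))  # ~3.0 bits/symbol -- inflation
```

The uniform coder is safe but useless; the wrong model actively makes the file bigger, which is the compression analogue of being worse than maximum entropy.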
Level 0 is knowledge, of a sort. I may not know or understand anything about (say) finite element methods, but I know that such a thing exists, roughly what domain it comes from and that I can, if necessary, locate and acquire more accurate knowledge about it.
Level 0 should perhaps be reserved for “unknown unknowns”, the things you don’t know you don’t know, where you don’t even know there’s a password to guess.
I may have been talking about a level at 1/2, where you don’t exactly know there’s a password, but you’re unsystematically collecting enough experience to have a basis for theories.
I would call unknown unknowns something like level −1.
Yes, it is a kind of knowledge—knowledge about what labels a specific group of people use in their domain. It allows you to make predictions about what words they will use when speaking to each other, so it is a narrow kind of understanding. But it’s important not to equate “understanding of how people talk” with “understanding of the models they’re talking about”.
One reason I put this level in is so that you can recognize these “empty labels” as something that you need to fill in; i.e., you shouldn’t stop at “light is waves”, but follow through to finding out “what model of the world corresponds to light being waves?”
The teacher is always a person, since she creates or uses the terminology from which the passwords are drawn. But otherwise agreed—knowing the passwords enables one to actually search for their meanings, and this takes one closer to understanding. The password-guessing level is better than nothing.
What I meant was that I have a better time with that preliminary stage if I’m doing something like trying to find a good combination of paper and ink for calligraphy than trying to satisfy a person.
When reading or watching Feynman, I really see the level-III understanding; it’s such a pleasure to see him go past all the formulae and really understand what’s going on.
The thing with physics is that it’s like a recursive level-III, where today’s level-III is like tomorrow’s level-II or even I.
The level analogy suggests that someone with level 2 understanding would usually understand a subject better than a person with level 1 understanding.
Not all calculations are equal, and there will be people with level 1 understanding who can calculate and predict better than some people with level 2 or level 3 understanding.
I’ve been talking to a lot of recruiters as I interview for web developer positions. The recruiters don’t code, but they are very familiar with all the technologies and buzz words. That made me think of this article. They use the words and often do so in a sensible way, but they don’t have a clue as to why their sentences make sense.
Example: “Yes, we use Angular on the front end because it provides the tools to solve the problems we have.” That’s a sensible answer, but the recruiter doesn’t know:
What “the front end” is.
What the problems they’re having are.
What the tools that Angular provides are.
I’m not sure what level of understanding the recruiters have. I think it’s Level 1. They can often actually answer the questions… but don’t really know what they’re saying. When given an input (question), they can (sometimes) respond with the right output (answer), but they don’t know what the words they speak actually mean. I wouldn’t say that they have a Level 0 understanding. If that were the case, then they wouldn’t actually be able to answer questions about technologies.
I’m noticing a bit of confusion in myself though, and I don’t think I fully understand the difference between Level 0 and Level 1.
Warning, aiming high too frequently while young may be hazardous to your grades and hence instrumentally irrational.
The traditional way to get around that is “don’t let your schooling get in the way of your education.”
Meh. Grades aren’t all that important.
It seems to me that stage 3 just means that you use correct scientific methods to learn & expand your knowledge (or am I missing something?). If that is correct, wouldn’t that mean you could essentially recreate the entire body of human knowledge given enough time & persistence?
The only knowledge that seems absolutely essential to me then is the scientific method itself. Given my human psychology, I’m reasonably certain that without that knowledge I would dream up an entire pantheon of gods to explain away everything and just stop there.
It is a long way from using correct methods to actually discovering something important. There are almost infinitely many ways you can apply the methods, so you have to know where to look to find the desired answers. Also, “given enough time & persistence” is a phrase which can very easily be misleading. You are certainly not in stage 3 if you would need 10^15 years to discover the relevant fact.
To be in stage 3 with general relativity, to take a particular example, you have to be in a state such that, after it is deleted from your head and you are given the question “how do you reconcile Lorentz transformations and gravity?”, it would instantly appear to you that the solution has something to do with curved spacetime and general covariance. You needn’t know what the solution looks like at the first moment, but the rederivation should appear to you as the most natural chain of inductions, without stopping at crossroads and randomly (or systematically) checking all possible ways forward, albeit using the scientific method.
After all, there were lots of people in the 1910s who knew the scientific method and all the available data, but only one discovered general relativity, and only a few others, if anybody, were even close.
Actually, Stage 3 works as a standard for the scientific method as well. That is, if knowledge of that specific method were deleted from your mind, would you rediscover it? Do you have an epistemology that would come up with an idea like, “Hey, I need to check these general ideas I have against nature, to see if they really hold” without it having been revealed to you in advance?
Ideally, you’d come up with (or have to start from!) something even better: the Bayesian rationalist method, of which the scientific method is a crippled, special case. While science is better than superstition, it also permits slower updates than you can justify, and often allows certain kinds of evidence that you shouldn’t count.
However, if you found yourself in a role analogous to “being one-eyed in the land of the blind”, and others’ minds weren’t capable of following Bayesian rationality, then you may want to teach them the scientific method as a next-best epistemology.
What if “Bayesian rationality” were deleted?
How did “Bayesian rationality” get discovered, except by the usual practices of scientists? (I won’t say “the scientific method”, partly because it’s really fuzzy and so the “the” at the beginning of the phrase is deceptively concrete, and partly because I don’t think that the process is as tidy as descriptions of the scientific method make it out to be.)
If we’re looking for an error-correcting system, we need to look for a vast number of weak epistemological principles, on the level of “if event X is followed almost immediately by event Y, guess that X is generally followed almost immediately by Y”, along with perceptual details of “how long?” and “what should count as an event?”.
They would probably be fiercely embodied, but that’s not actually a problem—we’re fiercely emphysicsed, after all.
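A toy version of one such weak principle (the events and observations are invented): count which event most often follows X, and guess that successor next time.

```python
from collections import Counter, defaultdict

# "If event X is followed almost immediately by event Y, guess that
# X is generally followed almost immediately by Y."
follow_counts = defaultdict(Counter)

def observe(sequence):
    """Tally every observed (event, next-event) pair."""
    for x, y in zip(sequence, sequence[1:]):
        follow_counts[x][y] += 1

def guess_next(x):
    """Predict the most frequently observed successor of event x."""
    return follow_counts[x].most_common(1)[0][0]

observe(["thunder", "rain", "sun", "thunder", "rain", "mud"])
print(guess_next("thunder"))  # rain
```

Of course, the real work hides in the perceptual questions the comment raises—what counts as an event, and how long is “almost immediately”.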
Beyond a certain point, the “regenerate if deleted?” metric becomes useless. For example, if your entire source code is “0″, well, everything’s been deleted, but there’s no way it’s growing back. There has to be somewhere to start. (Related: Where recursive justification hits bottom)
Still, you can characterize epistemic states by how much they could recover, from how deep a deletion, which was one point of the Truly Part of You article. I can imagine simpler epistemic states, lacking knowledge of the scientific method, that could recover Bayesian rationality: you would need to recognize that primitive-future has dynamics very close to primitive-past (where primitive-X denotes the inborn, intuitive understanding of X), which gives you induction, and, combined with basic numeracy, could point you in the right direction.
That was my main problem with the definition of stage 3 and was why I posted my original comment. It seemed to me that you could apply stage 3 to parts of your knowledge but not for everything.
When I read ‘This stage should be the goal of all rationalists.’ (in the original post) I was confused, because it seemed to me that stage 3 was unreachable. I mean, if I started with only my human psychology, my senses, and the world around me (i.e. the level of a caveman), I don’t think I would invent math, physics, and so on. Stage 3 seemed reachable only if I assumed infinite time & persistence and scientific reasoning.
I don’t know about deducing the entire mindset & toolbox of ‘Bayesian rationality,’ but knowing Bayes’ theorem is the key part of it, and I wouldn’t expect that to be too hard to reconstruct if you knew what to look for.
Bayes’ theorem follows trivially from the definition of conditional probability, and that definition is itself quite intuitive. So in theory, once you have a feel for what probability is, it’d be quite possible to get to Bayes’ theorem. I haven’t read Huygens’ 1657 book on probability theory, but if it was any good, I bet Huygens knew enough about it to beat Bayes to Bayes’ theorem by a century.
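Spelled out, the derivation is just two applications of the definition of conditional probability:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad
P(B \mid A) = \frac{P(A \cap B)}{P(A)}
\;\Longrightarrow\;
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}.
```

Equate the two expressions for P(A ∩ B) and divide by P(B), and Bayes’ theorem falls out.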
Chapters 1 and 2 of Jaynes’ Probability Theory: The Logic of Science show how Bayes’ theorem follows necessarily from certain basic principles of plausible reasoning. In some sense all roads lead to Bayes when trying to derive a consistent mathematical procedure for manipulating degrees of plausibility.
You are quite right. I thought about mentioning the Cox-Jaynes road to Bayes’ theorem in my post, but decided that someone trying to reconstruct Bayes’ theorem would be more likely to get to it by muddling through intuitively via conditional probability.
Does this work for things such as history, philosophy, and biology? If so, do you have some examples?
This brings my understanding of understanding to level 3.
… they said to the teacher, in hopes of finding the password.