I mainly have evidence for the absolute level, not necessarily for the trend (of science getting worse). For the trend, I could point to Goodhart phenomena, like the publications-per-unit-time metric being relied on, gamed, and growing worse as time progresses.
I also think that in this context, the absolute level is evidence of the trend, when you consider that the number of scientists has increased; if the quality of science in general has not increased with more people, it’s getting worse per unit person.
For the absolute level, I’ve noticed scattered pieces of the puzzle that, against my previous strong presumption, support my suspicions. I’m too sleepy to go into detail right now, but briefly:
- There’s no way that all the different problems being attacked by researchers can be really, fundamentally different: the function space is too small for a unique one to exist for each problem, so most should be reducible to a mathematical formalism that can be handed to mathematicians, who can tell you whether it’s solvable.
- There is evidence that such connections are not being made. The example I use frequently is ecologists and the method of adjacency matrix eigenvectors (see the sketch after this list). That method has been around since the 1960s and forms the basis of Google’s PageRank, allowing it to identify crucial sites. Ecologists didn’t apply it to the problem of identifying critical ecosystem species until a few years ago.
- I’ve gone into grad school myself and found that existing explanations of concepts are a scattered mess: it’s almost like they don’t want you to understand papers or break into advanced topics that are the subject of research. Whenever I do understand such a topic, I find myself able to explain it in much less time than the experts in the field took to explain it to me. This creates a fog over research, allowing big mistakes to last for years, with no one ever noticing them because too few eyeballs are on them. (This explanation barrier is the topic of my ever-upcoming article “Explain yourself!”)
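(On the second point above, here is a minimal sketch of what I mean by the adjacency-matrix eigenvector method: PageRank-style power iteration over a made-up toy interaction matrix. The helper name and the toy web are hypothetical illustrations, not the ecologists’ actual method or data.)

```python
import numpy as np

def eigenvector_centrality(adj, damping=0.85, iters=100):
    """Rank nodes by the dominant eigenvector of a (damped) adjacency matrix,
    the same basic computation behind PageRank."""
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0            # avoid division by zero for nodes with no out-links
    transition = adj / col_sums              # column-normalize: each node spreads its influence
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):                   # power iteration
        rank = (1 - damping) / n + damping * (transition @ rank)
    return rank / rank.sum()

# Hypothetical 4-species "who affects whom" matrix: web[i, j] = 1 means species j affects species i.
web = np.array([[0, 1, 1, 0],
                [0, 0, 1, 1],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)

print(eigenvector_centrality(web))           # the highest-scoring index is the most "critical" node
```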
As an example of what a mess it is (and at the risk of provoking emotions that aren’t relevant to my point), consider climate science. This is an issue where climate scientists have to convince LOTS of people, most of whom aren’t as smart. You would think that in documenting the evidence supporting their case, scientists would establish a solid walkthrough: a runnable, editable model with every assumption traceable to its source and all inputs traceable to the appropriate databases.
Yet when climate scientists were in the hot seat last fall and wanted to reaffirm the strength of their case, they had no such site to point anyone to. RealClimate.org made a post saying basically, “Um, anyone who’s got the links to the public data, it’d be nice if you could post them here...”
To clarify, I’m NOT trying to raise the issue of AGW being a scam, etc. I’m saying that no matter how good the science is, here we have a case where it’s of the utmost importance to explain the research to the masses, and so you would expect it to have the most thorough documentation and traceability. Yet here, at the top of the hill, no one bothered to trace out the case from start to finish, fully connecting this domain to the rest of collective scientific knowledge.
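(To sketch what “every assumption traceable to its source” could look like mechanically: this is a toy illustration with placeholder names, numbers, and citations, not anything RealClimate or anyone else actually published.)

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """A model input that carries its provenance around with it."""
    name: str
    value: float
    source: str     # citation or database URL; placeholders below

# Placeholder inputs: the numbers and sources are illustrative, not real data.
climate_sensitivity = Assumption("warming per CO2 doubling (degrees C)", 3.0, "citation goes here")
co2_growth = Assumption("CO2 growth rate (ppm/year)", 2.0, "database link goes here")

def toy_projection(years: float) -> float:
    """A deliberately crude one-line 'model'; the point is only that every
    number it uses can be traced back to an Assumption with a source."""
    fraction_of_doubling = (co2_growth.value * years) / 280.0   # relative to a ~280 ppm baseline
    return climate_sensitivity.value * fraction_of_doubling

for a in (climate_sensitivity, co2_growth):
    print(f"{a.name} = {a.value}  [source: {a.source}]")
print("toy warming over 100 years:", round(toy_projection(100), 2), "degrees C")
```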
If the quality of science in general has not increased with more people, it’s getting worse per unit person.
Er, I’d just expect to see more science being done. I know of no one studying overall mechanisms of science-as-it-is-realized (little-s “science”), and thereby seriously influencing it. Further, that’s not something current science is likely to worry about, unless someone can somehow point to irrefutable evidence that science is underperforming.
All of the points you list are real issues; I watch them myself, to constant frustration. I think they have a common cause in the incentive structure of science. The following account has been hinted at many times around Less Wrong, but spelling it out may make it clear how your points follow:
Researchers focus on churning out papers that can actually get accepted at some highly-rated journal or conference, because the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants. This quantity has a strong effect on scientists’ individual futures and their reputations. For all but the most well-established or idealistic scientists, this pressure overrides the drive to promote general understanding, increase the world’s useful knowledge, or satisfy curiosity[*].
This pressure means that scientists seek the next publication and structure their investigations to yield multiple papers, rather than telling a single coherent story from what might be several least publishable units. Thus, you should expect little synthesis—a least publishable unit is very nearly the author’s research minus the current state of knowledge in a specialized subfield. Thus, as you say, existing explanations are a scattered mess.
Since these explanations are scattered and confusing, it’s brutally difficult to understand the cutting edge of any particular subfield. Following publication pressure, papers are engineered to garner acceptance from peer reviewers. Those reviewers are part of the same specialized subfield as the author. Thus, if the author fails to use a widely-known concept from outside his subfield to solve a problem in his paper, the reviewers aren’t likely to catch it, because it’s hard to learn new ideas from other subfields. Thus, the author has no real motivation to investigate subfields outside of his own expertise, and we have a stable situation. Thus, your first and second points.
All this suggests to me that, if we want to make science better, we need to somehow twiddle its incentive structure. But changing longstanding organizational and social trends is, er, outside of my subfield of study.
[*] This demands substantiation, but I have no studies to point to. It’s common knowledge, perhaps, and it’s true in the research environments I’ve found myself in. Does it ring true for everyone else reading this, with appropriate experience of academic research?
It’s been broken forever, in basically the same way it is now...
the quantity of such papers is seen as the main guarantor of being hired as faculty, making tenure, and getting research grants.
No, these are recent developments (though the stuff from your first post may be old). For the first 300 years, scientists were amateurs without grants, and no one cared about quantity. For evidence of recent changes, look at the age of NIH PIs.
At the conclusion of the interview, Pierre deduces one general lesson: “You can’t be inhibited, you must free yourself of the psychological obstacle that consists in being tied to something.” Oh no, our friend Pierre is not inhibited; look how for the past twenty years he has jumped from subject to subject, from boss to boss, from country to country, bringing into action all the differences of potential, seizing polypeptides, selling them off as soon as they begin declining, betting on Monod and then dropping him as soon as he gets bogged down; and here he is, ready to pack his bags again for the West Coast, the title of professor, and a new laboratory. What thing is he accumulating? Nothing in particular, except perhaps the absence of inhibition, a sort of free energy prepared to invest itself anywhere. Yes, this is certainly he, the Don Juan of knowledge. One will speak of “intellectual curiosity,” a “thirst for truth,” but the absence of inhibition in fact designates something else: a capital of elements without use value, which can assume any value at all, provided the cycle closes back on itself while always expanding further. Pierre Kernowicz capitalizes the jokers of knowledge.
-- Bruno Latour, Portrait of a Biologist as Wild Capitalist
I think you’ve got an example of generalizing from one example, and perhaps the habit of thinking of oneself as typical—you’re unusually good at finding clear explanations, and you think that other people could be about as good if they’d just try a little.
I suspect they’d have to try a lot.
As far as I can tell, most people find it very hard to imagine what it’s like to not understand knowledge they’ve assimilated, which is another example of the same mistake.
Well, I appreciate the compliment, but keep in mind you haven’t personally put me to the test on my claim to have that skill at explaining.
As far as I can tell, most people find it very hard to imagine what it’s like to not understand knowledge they’ve assimilated, which is another example of the same mistake.
But I don’t understand why this would be hard—people make quite a big deal about how “I was a little boy/girl like you too one time”. Certainly a physics professor would generally remember what it was like to take their first physics class, what confused them, what way of thinking made it clearer, etc.
(I remember that one of my professors, later my grad school advisor (bless his heart), was a master at explaining topics and achieving level 2 understanding of them. He was always able to connect a topic back to related ones, and if students had trouble understanding something, he was always able to identify what the knowledge deficit was and jump in with an explanation of the background info needed.)
To the extent that your assessment is accurate, this problem people have can still be corrected by relatively simple changes in practice. For example, instead of just learning the next class up and moving on, people could make a habit of checking how it connects to the previous class’s knowledge, to related topics, to introductory-class knowledge, and to layperson knowledge. It wouldn’t help people retroactively, since you have to make it an ongoing effort, but it doesn’t sound like it’s hard.
Also, is it really that hard for people to ask themselves, “Assume I know nothing. What would I have to be told to be able to do this?”
Certainly a physics professor would generally remember what it was like to take their first physics class, what confused them, what way of thinking made it clearer, etc.
I remember that it was all pretty straightforward and intuitive. This was not a typical experience, and it also means that I don’t really know what average students have trouble with in basic Newtonian physics. Physics professors tend to be people who were unusually good at introductory physics classes. (Meanwhile, I can’t seem to find an explanation of standard social skills that doesn’t assume a lot of intuitions that I find non-obvious. Fucking small talk, how does it work?!)
Most professors weren’t typical students, so why would their recollections be a good guide to what problems typical students have when learning a subject for the first time?
I remember intro physics being straightforward and intuitive, and I had no trouble explaining it to others. In fact, the first day we had a substitute teacher who just told us to read the first chapter, which was just the basics like scientific notation, algebraic manipulation, unit conversion, etc. I ended up just teaching the others when something didn’t make sense.
If there was any pattern to it, it was that I was always able to “drop back a level” to any grounding concept. “Wait, do you understand why dividing a variable by itself cancels it out?” “Do you understand what multiplying by a power of 10 does?”
That is, I could trace back to the beginning of what they found confusing. I don’t think I was special in having this ability—it’s just something people don’t bother to do, or don’t themselves possess the understanding to do, whether it’s teaching physics or social skills (for which I have the same complaint as you).
Someone who really understands sociality (i.e., level 2, as mentioned above) can fall back to the questions of why people engage in small talk, and what kind of mentality you should have when doing so. But most people either don’t bother to do this, or have only an automatic (level 1) understanding.
Do you ever have trouble explaining physics to others? Do you find any commonality to the barriers you encounter?
In mathy fields, how much of it is caused by insufficiently deep understanding and how much of it is caused by taboos against explicitly discussing intuitive ways of thinking that can’t be defended as hard results? The common view seems to be that textbooks/lectures are for showing the formal structure of whatever it is you’re learning, and to build intuitions you have to spend a lot of time doing exercises. But I’ve always thought such effort could be partly avoided if instead of playing dignified Zen master, textbooks were full of low-status sentences like “a prisoner’s dilemma means two parties both have the opportunity to help the other at a cost that’s smaller than the benefit, so it’s basically the same thing as trade, where both parties give each other stuff that they value less than the other, so you should imagine trade as people lobbing balls of stuff at each other that grow in the lobbing, and if you zoom out it’s like little fountains of stuff coming from nowhere”. (ETA: I mean in addition to the math itself, of course.) It’s possible that I’m overrating how much such intuitions can be shared between people, maybe because of learning-style issues.
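(For what it’s worth, the “help at a cost smaller than the benefit” framing really does pin the game down; here is a minimal check with made-up numbers, where c and b are placeholders chosen only so that c < b.)

```python
# Toy check: "help at cost c, confer benefit b" with c < b yields the
# classic prisoner's dilemma payoff ordering.
c, b = 1.0, 3.0   # hypothetical cost and benefit, chosen only so that c < b

def payoff(my_move, their_move):
    """My net payoff: I pay c if I help, and receive b if they help."""
    return (b if their_move == "help" else 0.0) - (c if my_move == "help" else 0.0)

T = payoff("defect", "help")    # temptation: they help, I don't    -> b
R = payoff("help", "help")      # reward: mutual help (the "trade") -> b - c
P = payoff("defect", "defect")  # punishment: nobody helps          -> 0
S = payoff("help", "defect")    # sucker: I help, they don't        -> -c

# T > R > P > S: defecting dominates individually, yet mutual help beats
# mutual defection, which is exactly the prisoner's dilemma structure.
assert T > R > P > S
print(T, R, P, S)   # 3.0 2.0 0.0 -1.0
```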
I think you’ve got something really important here. If you want to get someone to an intuitive understanding of something, then why not go with explanations that are closer to that intuitive understanding? I usually understand such explanations a lot better than more dignified explanations, and I’ve seen that a lot of other people are the same way.
I remember when a classmate of mine was having trouble understanding mutexes, semaphores, monitors, and a few other low-level concurrency primitives. He had been to the lectures, read the textbook, looked it up online, and was still baffled. I described to him a restroom where people use a pot full of magic rocks to decide who can use the toilets, so they don’t accidentally pee on each other. The various concurrency primitives were all explained as funny rituals for getting the magic toilet permission rocks. E.g. in one scheme people waiting for a rock stand in line; in another scheme they stand in a throng with their eyes closed, periodically flinging themselves at the pot of rocks to see if any are free. Upon hearing this, my friend’s confusion was dispelled. (For my part, I didn’t understand this stuff until I had translated it into vague images not too far removed from the stupid bathroom story I told my friend. The textbook explanations are just bad sometimes.)
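(In case the rocks stick better than the textbook terms, here is a minimal sketch of the mapping in Python’s threading module. The restroom framing and names like pot_of_rocks are just the analogy carried over, not a real API.)

```python
import threading
import time

# The pot of magic rocks: three rocks means at most three people may use
# the toilets at once (a counting semaphore).
pot_of_rocks = threading.Semaphore(3)

# One special rock for the single shared sink (a mutex).
sink_rock = threading.Lock()

def visit_restroom(person):
    with pot_of_rocks:      # stand in line until a rock is free (blocking acquire)
        print(f"{person} got a rock and is using a toilet")
        time.sleep(0.01)
    # The eyes-closed "flinging" ritual would be a busy loop around
    # pot_of_rocks.acquire(blocking=False) instead of the blocking wait above.
    with sink_rock:         # only one person at the sink at a time
        print(f"{person} is washing their hands")

threads = [threading.Thread(target=visit_restroom, args=(f"person {i}",)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```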
Or for another example, I had terrible trouble with basic probability theory until I learned to imagine sets of things that could happen, and visualize them as these hazy blob things. Once that happened, it was as if my eyes had finally opened, and everything became clear. I was kind of pissed off that all the classes I’d been in that tried to teach probability focused exclusively on the equations, so I’d had to figure out the intuitive stuff without any help.
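(A minimal sketch of the “sets of things that could happen” picture, using two dice as a stand-in example of my own choosing: events are literally sets, and the set operations line up with the probability rules.)

```python
from fractions import Fraction

# The "hazy blobs": events are just sets of outcomes from one sample space.
outcomes = {(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)}   # two fair dice

def prob(event):
    """Probability = size of the blob / size of the whole space (uniform outcomes)."""
    return Fraction(len(event), len(outcomes))

doubles  = {o for o in outcomes if o[0] == o[1]}
ten_plus = {o for o in outcomes if o[0] + o[1] >= 10}

print(prob(doubles))               # 1/6
print(prob(ten_plus))              # 1/6
print(prob(doubles & ten_plus))    # intersection: both at once -> 1/18
print(prob(doubles | ten_plus))    # union: add the blobs, minus the overlap -> 5/18
```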
As a side-note, this is one reason why I’m optimistic about online education like Salman Khan’s videos. It’s not that they’re inherently better, obviously, but they have the potential for much more competition. I can imagine students in The Future comparing lecturers, with the underlying assumption that you can trivially switch at any time. “Oh, you’re trying to learn about the ancient Roman sumptuary laws from Danrich Parrol’s lectures? Those are pretty mind-numbing; try Nile Etland’s explanations instead. She presents the different points of view by arguing vehemently with herself in several funny accents. It’s surprisingly clear, even if she does sound like a total nutcase.”
[Side-note to the side-note: I think more things should be explained as arguments. And the natural way to do this is for one person to hold a crazy multiple-personality argument-monologue. This also works for explaining digital hardware design as a bunch of components having a conversation. “You there! I have sent you a 32-bit integer! Tell me when you’re done with it!” Works like a charm.]
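(The component conversation in that side-note is basically a handshake, and you can even run it; here is a toy sketch with two threads passing a 32-bit value and an acknowledgement over queues. The wire names and values are made up for illustration.)

```python
import queue
import threading

data_wire = queue.Queue(maxsize=1)   # "You there! I have sent you a 32-bit integer!"
ack_wire = queue.Queue(maxsize=1)    # "Tell me when you're done with it!"

def sender():
    for value in (0xDEADBEEF, 0x00000042):
        data_wire.put(value & 0xFFFFFFFF)
        print(f"sender: sent {value:#010x}, waiting to hear back")
        ack_wire.get()               # block until the receiver says it's done

def receiver():
    for _ in range(2):
        value = data_wire.get()
        print(f"receiver: got {value:#010x}, done with it")
        ack_wire.put(True)

threads = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```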
Man, the future of education will be silly. And more educational!
Man, the future of education will be silly. And more educational!
It wouldn’t surprise me if a big part of the problem now is the assumption that there’s virtue to enduring boredom, and a proof of status if you impose it.
It wouldn’t surprise me if a big part of the problem now is the assumption that there’s virtue to enduring boredom
If by boredom you mean dominance and inequality, then Robin Hanson has been riffing on this theme lately. The main idea is that employers need employees who will just accept what they’re told to do instead of rebelling and trying to form a new tribe in a nearby section of savannah. School trains some of the rebelliousness out of students. See e.g., this, this, and this.
No, by boredom I mean lack of appropriate levels of stimulus, and possibly lack of significant work.
Dominance and inequality can play out in a number of ways, including chaos (imagine a badly run business with employees who would like things to be more coherent), physical abuse, and deprivation. Imposed boredom is only one possibility.
Causing people to have, or feel they have, no alternatives is how abusive authorities get away with it.
That sounds like such fun!
It’s every bit as fun as you imagine. And it works great.
Heh, this reminds me of this discussion of Plain Talk on a wiki I participated in years ago. I must have drawn those little characters, what, ten years ago? Not quite (more like six or seven), but it feels like ages ago.
I agree with this. It is also true that people’s intuitions differ, and people respond differently to different kinds of informal explanation. steven0461’s explanation of the Prisoner’s Dilemma would be good for someone accustomed to thinking visually, for example. For this reason, your vision of individual explanations competing (or cooperating) is important.
One of the things I’ve always disliked about mathematical culture is this taboo against making allowances for human weakness on the part of students (of any age). For example, the reluctance to use “plain English” aids to intuition, or pictures, or motivations. Sometimes I almost think this is a signaling issue, where mathematicians want to display that they don’t need such crutches. But it seems to get in the way of effective communication.
You can go too far in the other direction—I’ve found that it can also be hard to learn when there’s too little rigorous formalism. (Most recently I’ve had that experience with electrical engineering and philosophy.) There ought to be a happy medium somewhere.
Sometimes I almost think this is a signaling issue, where mathematicians want to display that they don’t need such crutches.
This isn’t really a signaling issue so much as a response to the fact that mathematicians have had centuries of experience in which apparent theorems turned out to be unproven, or not even true, and the failures were due to too much reliance on intuition. Classic examples include the roughly decade-long period in the 19th century when people thought that the Four Color Theorem had been proven. Also, a lot of these sorts of issues came up in calculus before it was put on a rigorous footing in the 1850s.
There may be a signaling aspect, but it is likely a small one. It’s more likely that mathematicians simply err on the side of rigor.
ETA: Another data point suggesting this isn’t about signaling: I’ve been to a fair number of talks in which people in the audience get annoyed because they think there’s too much formalism hiding some basic idea, in which case they’ll sometimes ask questions of the form “what’s the idea behind the proof?” or “what’s the moral of this result?”
Just to be clear: I’m not against rigor. Rigor is there for a good reason.
But I do think that there’s a bias in math against making it easy to learn. It’s weird.
Math departments, anecdotally in nearly all the colleges I’ve heard of, are terrible at administrative conveniences. Math will be the only department that doesn’t put lecture notes online, won’t announce the correct textbook for the course, won’t produce a syllabus, won’t announce the date of the final exam. Physics, computer science, etc., don’t do this to their students. This has nothing to do with rigor; I think it springs from the assumption that such details are trivial.
I’ve noticed a sort of aesthetic bias (at least in pure math) against pictures and “selling points.” I recall one talk where the speaker emphasized how transformative his result could be for physics—it was a very dramatic lecture. The gossip afterwards was all about how arrogant and salesman-like the speaker was. That cultural instinct—to disdain flash and drama—probably helps with rigorous habits of thought, but it ruins our chances to draw in young people and laymen. And I think it can even interfere with comprehension (people can easily miss the understated).
Over 99% of students learning math aren’t going to be expected to contribute to cutting-edge proofs, so I don’t regard this as a good reason not to use “plain English” methods.
In any case, a plain English understanding can allow you to bootstrap to a rigorous understanding, so more hardcore mathematicians should be able to overcome any problem introduced this way.
I agree that this is likely often suboptimal when teaching math. The argument I was presenting was that this approach was not due to signaling. I’m not arguing that this is at all optimal.
I don’t think this problem is limited to math: it’s present in all cutting-edge or graduate school levels of technical subjects. Basically, if you make your work easily accessible to a lay audience[1], it’s regarded as lower status or less significant. (“Hey, if it sounds so simple, it must not have been very hard to get!”)
And ironically enough, this thread sprang from me complaining about exactly that (see esp. the third bullet point).
[1] And contrary to what turf-defenders like to claim, this isn’t that hard. Worst case, you can just add a brief pointer to an introduction to the topic and terminology. To borrow from some open source guy, “Given enough artificial barriers to understanding, all bugs are deep.”
The common view seems to be that textbooks/lectures are for showing the formal structure of whatever it is you’re learning
I thought that writing was for that and lectures were supposed to be informal, the kind of thing you were asking for. And I thought everyone agreed that lectures work much better.
I thought that writing was for that and lectures were supposed to be informal, the kind of thing you were asking for.
I think you’re right, but only to a limited (varying) degree. I also think it’s not just a matter of being informal, but a matter of just stating explicitly a lot of insights that you’re “supposed” to get only through hard mental labor.
I don’t have an answer, but I can attest to not mimicking a textbook when I try to explain high school math to someone. Rather, I first find out where the gap is between their understanding and where I want them to be.
Of course, textbooks don’t have the luxury of probing each student’s mind.
That is, I could trace back to the beginning of what they found confusing. I don’t think I was special in having this ability—it’s just something people don’t bother to do, or don’t themselves possess the understanding to do, whether it’s teaching physics or social skills (for which I have the same complaint as you).
This demonstrates a highly developed theory of mind. In order to do this, one needs both a good command of the material and a good understanding of what people are likely to understand or not understand. This is often very difficult.
I thought I should add a pointer to one of the replies, because it’s another anecdote from when the poster noticed the difference (in what “understand” means) in an encounter with another person who had a lower threshold.
Maybe there is a wide variance in “understanding criteria” or “curiosity shut-off point” which has real importance for how people learn.
Maybe so, but then this would be the only area where I have a highly-developed theory of mind. If you ask the people who have seen me post for a while, the consensus is that this is where I’m most lacking. They don’t typically put it in terms of a theory of mind, but one complaint about me can be expressed as, “he doesn’t adequately anticipate how others will react to what he does”—which amounts to saying I lack a good theory of mind (a common characteristic of autistics).
But that gives me an idea: maybe what’s unique about me is what I count as a genuine understanding. I don’t regard myself as understanding the material until I have “plugged it in” to the rest of my knowledge, so I’ve made a habit of ensuring that what I know in one area is well-connected to other areas, especially its grounding concepts. I can’t, in other words, compartmentalize subjects as easily.
(That would also explain what I hated about literature and, to a lesser extent, history—I didn’t see what they were building off of.)
Yes, I had that thought also but wasn’t sure how to put it. Frankly, I’m a bit surprised that you had that good a theory of mind for physics issues. Your hypothesis about plugging in seems plausible.
Also, it looks like EY already wrote an article about the phenomenon I described: when people learn something in school, they normally don’t bother to ground it like I’ve described, and so don’t know what a true (i.e., level 2) understanding looks like.
(Sorry to keep replying to this comment!)
Don’t let that stop you from writing about related topics.
For me, a small but significant hack suggested by Anna Salamon was to try to act (and later, to actually be) cheerful and engaged instead of wittily laconic and ‘intelligent’. That said, it’s rare that I remember to even try. Picking up habits is difficult.