How to get people to produce more great exposition? Some strategies and their assumptions
Some of the recent posts in the distillation & pedagogy tag here on LessWrong, such as “Call For Distillers” and the AI Safety Distillation Contest, have been bugging me, and this post is my attempt to introspect about why.
Here are four strategies for achieving the goal of creating more great expository pieces:[1]
Encourage people to produce more expository pieces and let people know that producing explanations is a valid path. In my interpretation, the “Call For Distillers” post mainly uses this strategy.
Give people money in exchange for producing explanations. The AI Safety Distillation Contest is maybe half about this strategy and half about the previous strategy.
Get people who are already good at producing explanations to mentor other people. I think the Distill Research Journal is mainly using this strategy.
Treat exposition as a scientific field of inquiry: try to open up the black box of what makes an explanation good and come up with techniques or “building blocks” of good explanations. Tim Gowers is the best example I know of, with his blog posts, YouTube videos, and the Tricki.
Each of these strategies comes with a set of assumptions that makes it the best strategy for achieving the goal:
Encouragement is a good strategy if the main problem is that people don’t know that exposition is a thing they can do. But if used as the main strategy, it also assumes that many people already have the skill of producing great exposition, or that they can learn it on their own.
Throwing money at the problem is a good strategy if people lack the time or resources to spend on creating explanations. As with the encouragement strategy, if this is the main strategy one employs then there is an assumption that many people are already good at producing exposition or can learn on their own. Alternatively, even if few people are great at exposition, throwing money can still look good as long as the funders can identify the few great expositors.
Mentorship is a good strategy if most people don’t know how to produce great explanations and it is difficult to learn how to do this on one’s own, but many people want to and have the capacity to learn the skill given help.
“Exposition as science” is a good strategy if no one really knows how to produce great explanations. The less that is currently known about how to produce great explanations, or the fewer the people capable of producing them (while still assuming that eventually many people can learn this skill),[2] the better this strategy looks.
Going back to what has been bugging me about the recent posts, here’s my attempt to articulate it: the strategies these posts employ reveal assumptions they have, and I think those assumptions are wrong. In particular, I believe that exposition is a skill for which taste matters a lot, that basically nobody knows how to produce good explanations at all (and even those who can sometimes produce good explanations cannot do so reliably or can only do so with great effort), and that it is difficult to learn the skill on one’s own. This makes me worried about the encouragement and “throw money” strategies being the main strategies: both are important, but if we don’t also employ other strategies, then we will end up with a bunch of mediocre explanations.[3]
In addition to using encouragement and “throw money” as “side” strategies, I am also in favor of mentorship (also as a “side” strategy). One reason is that mentorship is realistic about the skill/taste aspect of producing great explanations. But my worry with mentorship is that, used as the main strategy, it is not very scalable, and it tends to keep insights about what makes explanations good boxed up inside a small circle, often illegible even to the mentors themselves (see “Do Scientists Already Know This Stuff?” and “Unteachable Excellence” for some similar points). In other words, mentorship is more illegible and elitist than I would like.
This leaves the “exposition as science” strategy, which I believe is the best main strategy to use, along with the other three strategies. I have not said much concretely about what kinds of work this strategy would involve. I hope to do this in a future post.
Acknowledgments: Thanks to Justis Mills for giving substantive feedback on a draft of this post (as part of LessWrong’s “get feedback” feature). Thanks also to Vipul Naik for reviewing this post.
[1] This is not intended to be an exhaustive list, and these strategies are not mutually exclusive. However, I will talk about people employing some strategy as the “main” strategy.
[2] My guess is that “ability to write good explanations” works kind of like “ability to write mathematical proofs”. Even bright high school students write terrible math proofs, but they can attend a proof-writing (or other undergraduate math) course or work through a few math textbooks to reliably acquire this skill. Right now bright high school students also mostly write terrible explanations, and there is no straightforward path to acquiring the ability to write good explanations, but eventually I think such a path can be created.
[3] I hope to explain my reasoning for the points made in this paragraph in one or more future posts.
As a criticism of the Call for Distillers, I endorse this post and broadly agree with it. The call for distillers was worth writing because it was relatively cheap for me to write, but I do not think it addressed the primary bottleneck to creating more good distillations.
One thing I think this post didn’t highlight quite enough:
We should expect identifying great expositors to be unusually easy, compared to the difficulty of identifying great work in other fields. Their outputs are highly visible, and should be legible even to people with relatively little expertise in the subject matter. The main trap is presumably the illusion of transparency, and that can largely be mitigated by using the supposedly-acquired understanding for something.
That said, I share the impression that no one really knows how to produce great explanations, or to the extent that they do the knowledge is illegible.
… which means distillation actually looks like unusually low-hanging fruit for progress! It’s an area with unusually easy-to-ground feedback loops, yet relatively little legible understanding of the art.
We can start by defining what we’re not talking about:
General writing advice or writing practice
Producing mediocre or unhelpful explanations
Producing thorough technical writing, such as a formal mathematical proof or a procedure for manufacturing a pharmaceutical
In general, we’re talking about something that is shorter, easier to understand, helps point to or provide necessary background, motivates the reader, leans more on the reader’s intuitions, and either decreases the net time investment or increases the success rate for readers approaching the longer original text.
I think it is a combination of three skills:
deep understanding of the subject;
knowing how to teach;
general writing/presentation skills (including how to make things fun).
If you fail at the first point, you are likely to produce very popular but misleading explanations.
If you fail at the second point, you will produce texts that all experts on the subject will agree are fantastic, and yet beginners will fail to understand them or to draw the right conclusions.
If you fail at the third point, you will produce a great but kinda boring textbook.
The good news is that the second and third skills are quite general; the catch is that they require a huge dose of genuine humility to listen to the actual experts and take their feedback (especially when you know quite well that your text would be hugely popular even if you ignored them and just used your own opinion instead).
This may actually go against our status-related instincts (which might explain the relative lack of great exposition experts). On one hand, you have to treat the subject with great respect, carefully avoiding possible misinterpretations, no matter how tempting they might be. On the other hand, attempting to explain the subject to a general audience in the simplest possible way feels disrespectful. Our instinct tells us that great wisdom requires great sacrifices to understand… and the goal here is precisely to communicate great wisdom with preferably no sacrifice on the listeners’ part (other than a little of their time).
For some people it may be psychologically painful to accept that others may learn in one hour a nontrivial part of what took them a few years of hard work to collect. It kinda makes you feel stupid.
Thanks for the post!
Thanks for reminding me of these resources, I think I knew about all of them at some point then forgot.
Strong agree. That’s my main problem with the current push for more distillation (I mentioned the problem to John at some point). Distillation is just plain hard, and most distillation I have seen around here (including a fair bunch of mine) has been useful as a learning exercise but completely useless for most other people.
Also alignment is particularly difficult to distill and explain IMO, for epistemological reasons that I will go into in a post in the near future.
That’s very exciting to me! I personally study how science has worked and failed historically, and epistemic progress and vigilance in general, in order to make alignment go faster and better, so I’ll be interested to discuss exposition as a science with you (and maybe give feedback on your follow-up posts if you want. ;) )
Cool! I just shared my draft post with you that goes into detail about the “exposition as science” strategy (ETA for everyone else: the post has now been published); if that post seems interesting to you, I’d be happy to discuss more with you (or you can just leave comments on the post if that is easier).
Thanks!
I will look at the post soonish. Sorry for the delay in answering; I was on holiday this week. ^^
Note that “Starting [July 2, 2021] Distill will be taking a one year hiatus, which may be extended indefinitely.” Full write-up here.
This is an interesting subject for me. My entire life I’ve had a weird habit of occasionally daydreaming about conversations with medieval people where I try to explain stuff in the modern world to them; and I tutored someone in high school algebra years ago who went from hating math to loving it; and just generally, I’ve been told by a few people that I’m good at explaining things. This isn’t much evidence, but it’s enough to make me consider that I might try my hand at distillation / exposition and see if I can help figure it out.
I agree that this ought to be made scientific and I’d be very interested in participating in studies or experiments to figure out how it works if you do any. Of course I ought to actually prove I’m good at exposition first.
One thing I think is very important for good exposition is taking the audience and their prior knowledge into account. If I tried to explain alignment to, for instance, a conservative Christian, I would use totally different terms and references than if I was explaining it to e.g. a neuroscientist.
EDIT: Wait. Isn’t exposition just teaching? There’s a whole theory of education that people go to college to learn, and I really hope it’s not all pseudoscience. We could start with that.
For me, the thing that distinguishes exposition from teaching is that in exposition one is supposed to produce some artifact that does all the work of explaining something, whereas in teaching one is allowed to jump in and e.g. answer questions or “correct course” based on student confusion. This ability to “use a knowledgeable human” in the course of explanation makes teaching a significantly easier problem (though still a very interesting one!). It also means, though, that scaling teaching would require scaling the creation of knowledgeable people, which is the very thing we are trying to solve. Can we make use of just one knowledgeable human and somehow produce an artifact that can scalably “copy” their knowledge to other humans? That is the exposition problem. (This framing is basically Bloom’s 2 sigma problem.)
Ah, I see! My immediate instinct is to say “okay, design a narrow AI to play the role of a teacher” but 1. a narrow AI may not be able to do well with that, though maybe a fine-tuned language model could after it becomes possible to guarantee truthfulness, and 2. that’s really not the point lol.
There is something to be said for interactivity though. In my experience, the best explanations I’ve seen have been explorable explanations, like the famous one about the evolution of cooperation. Perhaps we can look into what makes those good and how to design them more effectively.
Also, something like a market for explanations might be desirable. What you’d need is three kinds of actors: testers seeking people who possess a certain skill; students seeking to learn the skill; and explainers who generate explorable explanations which teach the skill. Testers reward the students who do best at the skill, and students reward the explanations which seem to improve their success with testers the most. Somehow I feel like that could be massaged into a market where the best explanations have the highest values. (Failure mode: explainers bribe testers to design tests in such a way that students who learned from their explanations do best.)
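To make that incentive loop a bit more concrete, here is a minimal toy simulation of the idea. Everything in it (the class names, the payout rule, the quality scores) is my own made-up illustration of the mechanism sketched above, not a worked-out design:

```python
import random

# Toy sketch of the proposed explanation market. The payout rule and
# quality numbers are invented for illustration only.

class Explanation:
    def __init__(self, name, quality):
        self.name = name
        self.quality = quality      # hidden "true" teaching power
        self.earnings = 0.0

class Student:
    def __init__(self):
        self.skill = 0.0

    def study(self, explanation):
        # Better explanations raise skill more, with some noise.
        self.skill += explanation.quality + random.gauss(0, 0.1)

def run_market(explanations, n_students=1000, tester_reward=1.0):
    for _ in range(n_students):
        student = Student()
        choice = random.choice(explanations)   # naive choice; no reputation yet
        student.study(choice)
        # Tester pays the student in proportion to demonstrated skill...
        payout = tester_reward * student.skill
        # ...and the student passes a share back to the explanation they used.
        choice.earnings += 0.5 * payout
    return sorted(explanations, key=lambda e: e.earnings, reverse=True)

if __name__ == "__main__":
    pool = [Explanation("A", 0.9), Explanation("B", 0.5), Explanation("C", 0.2)]
    for e in run_market(pool):
        print(f"{e.name}: earnings ~ {e.earnings:.1f}")
```

In this toy version the higher-quality explanations end up with higher earnings on average, which is the property the market is supposed to have; it does nothing to address the bribery failure mode, which would need a separate mechanism.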