Liked this essay and upvoted, but there’s one part that feels a little too strong:
There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it. [...]
It’s been pointed out before that most high-schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.
Suppose that I have studied a particular field X, and this has given me a particular set of intuitions about how things work. They’re not based on any specific claim that I could cite directly, but rather on a vaguer feeling of “based on how I understand things to generally work, this seems to make the most sense to me”.
I now have an experience E. The combination of E and the intuitions I gathered from studying X causes me to form a particular belief. However, if I had not studied X, I would have interpreted the experience differently, and would not have formed the belief.
If I now want to communicate the reasons behind my belief to LW readers, and expect many readers to be unfamiliar with X, I cannot simply explain that E happened to me and therefore I believe this. That would be an accurate account of the causal history, but it would fail to communicate many of the actual reasons. I could also say that “based on studying X, I have formed the following intuition”, but that wouldn’t really communicate the actual generators of my belief either.
But what I can do is try to query my intuition and translate it into the kind of framework that I expect LW readers to be more familiar with. E.g. if I have intuitions from psychology, I can find analogous concepts in machine learning and express my idea in terms of those. Now this isn’t quite the same as just writing the bottom line first, because sometimes when I try to do this, I realize that there’s some problem with my belief and then I actually change my mind about what I believe. But from the inside it still feels a lot like “persuasion”, because I am explicitly looking for ways of framing and expressing my belief that I expect my target audience to find persuasive.
This is definitely the use case where “explain how you came to think Y” is hardest; there’s a vague ball of intuitions playing a major role in the causal pathway. On the other hand, making those intuitions more legible (e.g. by drawing analogies between psych and ML) tends to have unusually high value.
I suspect that, from Eliezer’s perspective, a lot of the Sequences came from roughly this process. He was trying to work back through his own pile of intuitions and where they came from, then serialize and explain as much of it as possible. It’s been a generator for a lot of my own writing as well—for instance, the Constraints/Scarcity posts came from figuring out how to make a broad class of intuitions legible, and the review of Design Principles of Biological Circuits came from realizing that the book had been upstream of a bunch of my intuitions about AI. It’s no coincidence that those were relatively popular posts—figuring out the logic which drives some intuitions, and making that logic legible, is valuable. It allows us to more directly examine and discuss the previously implicit, intuitive arguments.
I wouldn’t quite liken it to persuasion. I think the thing you’re trying to point to is that the author does most of the work of crossing the inferential gap. In general, when two people communicate, either one can do the work of translating into terms the other person understands (or they can split that work, or a third party can help, etc.; the point is that someone has to do it). When trying to persuade someone, that burden is definitely on the persuader. But that’s not exclusively a feature of persuasion—it’s a useful habit in general to try to cross most of the inferential gap oneself, and it’s important for clear writing. The goal is still to accurately convey some idea/intuition/information, not to persuade the reader that the idea/intuition/information is right.