How To Write Quickly While Maintaining Epistemic Rigor
There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece.
This post is about how to avoid that, without sacrificing good epistemics.
There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.
I claim that this promotes better epistemics overall than always researching everything in depth.
Why?
It’s About The Process, Not The Conclusion
Suppose I have a box, and I want to guess whether there’s a cat in it. I do some tests—maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”.
Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That’s not really what I’m aiming for; luck isn’t reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don’t just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly.
Correctly Conveying Uncertainty
Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty.
An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and if anything remarkable had happened then which might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn’t even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints.
Now, let’s zoom out. How much confidence should I put in this theory? Obviously not very much—we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more.
On the other hand… what if I had started with the hypothesis that computational constraints limited premodern empires? What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It’s a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data.
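To make that asymmetry concrete, here's a rough Bayesian sketch (the numbers are made up purely for illustration, not estimates I'd defend):

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

Suppose the data $D$ (bureaucratic automation became necessary right around the size at which premodern empires broke down) is ten times more likely if the computational-constraints hypothesis $H$ is true than if it isn't. That likelihood ratio is the same regardless of when I thought of $H$; what changes is the prior $P(H)$. If I only constructed $H$ after seeing $D$, it was one of many hypotheses that could have fit the data, so its prior is low: prior odds of 1:100 update to about 1:10, still probably false. If I had independent reasons to take $H$ seriously before looking, say prior odds of 1:3, the same tenfold update lands at roughly 10:3, i.e. probably true.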
Point is: the amount of uncertainty I should assign depends on the details of my process. It depends on the path by which I reached the conclusion.
This carries over to my writing: if I want to accurately convey my uncertainty, then I need to accurately convey my process. Those details are relevant to how much certainty my readers should put in the conclusion.
So Should I Stop Researching My Claims?
No. Obviously researching claims still has lots of value. But you should not let uncertainty stop you from writing things up and sharing them. Just try to accurately convey your uncertainty, by communicating the process.
Bad Habits
It’s been pointed out before that most high schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.
There’s a closely related sub-habit in which people try to only claim things with very high certainty. This makes sense in a persuasion/debate frame—any potential loophole could be exploited by “the other side”. Arguments are soldiers; we must show no weakness.
Good epistemic habits include living with uncertainty. Good epistemic discourse includes making uncertain statements, and accurately conveying our uncertainty in them. Trying to always research things to high confidence, and never sharing anything without high confidence, is a bad habit.
Takeaway
So you have some ideas which might make cool LessWrong posts, or something similar, but you’re not really confident enough that they’re right to put them out there. My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look. In the process, you’ll probably convey your own actual level of uncertainty, which is exactly the right thing to do.
… and of course don’t stop researching interesting claims. Just don’t let that be a bottleneck to sharing your ideas.
Addendum: I’m worried that people will read this post, think “ah, so that’s the magic bullet for a LW post”, then try it, and be heartbroken when their post gets like one upvote. Accurately conveying one’s thought process and uncertainty is not a sufficient condition for a great post; clear explanation and novelty and interesting ideas all still matter (though you certainly don’t need all of those in every post). Especially clear explanation—if you find something interesting, and can clearly explain why you find it interesting, then (at least some) other people will probably find it interesting too.
Reviews
I upvoted this highly for the review. I think of this as a canonical reference post now for the sort of writing I want to see on LessWrong. This post identifies an important problem I’ve seen a lot of people struggle with, and writes out clear instructions for addressing it.
I guess a question I have is “how many people read this and had it actually help them write more quickly?” I’ve personally found the post somewhat helpful, but I think I mostly already had the skill.
I think I sorta implicitly already knew what this post is saying, and thus the value of this post for me was in crystallizing that implicit knowledge into explicit knowledge that I could articulate / reflect on / notice / etc.
I can’t recall a situation where this post “actually helped me write more quickly”. I vaguely recall that there were times when this post popped into my head while thinking about whether or not to write something at all, and maybe how to phrase and structure it.
I think in my case it’s more likely the post helped me write more rigorously, rather than quickly. I.e., by default I write quickly without much rigor, and this post pointed out a cheap-ish way to include more epistemic handholds.
I think this post does two things well:
- helps lower the internal barrier for what is “worth posting” on LW
- helps communicate the epistemic/communication norms that define good rationalish writing
Writing up your thoughts is useful. Both for communication and for clarification to oneself. Not writing for fear of poor epistemics is an easy failure mode to fall into, and this post clearly lays out how to write anyway. More writing equals more learning, sharing, and opportunities for coordination and cooperation. This directly addresses a key point of failure when it comes to groups of people being more rational.