There’s a common view that a researcher’s output should look like a bunch of results: “A, B, C therefore X, Y, Z”. But we also feel, subconsciously and correctly, that such output has an air of finality and won’t attract many comments. Look at musical subreddits, for example: people posting their music get few comments, while people asking for help get more. So when I post a finished result on LW and get few comments, the problem is with me. There must be a better way to write posts, one less focused on answering questions and more on making the questions as interesting as possible. But that’s easier said than done; I don’t think I have that skill.
*nods* I do think there is a lot of value in people just asking good questions, and that I would like to see more people doing that in the AI Alignment space.
See *Writing that Provokes Comments*.
(this whole concept is part of why I’m bullish on LW shifting to focus more on questions than posts)
Thanks, great post! And good comments too. Not sure how I missed it at the time.