I think this is completely correct, and have been thinking along similar lines lately.
The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth-tracking actually is the norm.
The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this).
I’ll try to post more content here too, and would be happy to volunteer to moderate if people feel that’s useful/needed.
This seems right to me. It seems to me that “moderation” in this sense is perhaps better phrased as “active enforcement of community norms of good discourse”, not necessarily by folks with admin privileges as such. Also simply explicating what norms are expected, or hashing out in common what norms there should be. (E.g., perhaps there should be a norm of posting all “arguments you want the community to be aware of” to Less Wrong or another central place, and of keeping up with all highly upvoted / promoted / otherwise “single point of coordination-marked” posts to LW.)
I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow “more important”. In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. “Someone is wrong on less wrong” seems to me to be a problem actually worth fixing; it seems like that’s how we make a community that is capable of vetting arguments.
Participating in online discussions tends to reduce one’s attention span. There’s the variable reinforcement factor. There’s also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)
These effects are so strong that if I stay away from the internet for a few days (“internet fast”), my attention span increases dramatically. And if I’ve posted comments online yesterday, it’s hard for me to focus today—there’s always something in the back of my mind that wants to check & see if anyone’s responded. I need to refrain from making new comments for several days before I can really focus.
Lots of people have noticed that online discussions sap their productivity this way. And due to the affect heuristic, they downgrade the importance & usefulness of online discussions in general. I think this inspired Patri’s Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting… so if video games are a distracting waste of time, Less Wrong must also be, right?
Except that doesn’t follow. Online content can be really valuable to read. Bloggers don’t have an incentive to pad their ideas the way book authors do. And they write simply instead of unnecessarily obfuscating like academics. (Some related discussion.)
Participating in discussions online is often high leverage. The ratio of readers to participants in online discussions can be quite high. Some numbers from the LW-sphere that back this up:
In 2010, Kevin created a thread where he asked lurkers to say hi. The thread generated 617 comments.
77% of respondents to the Less Wrong survey have never posted a comment. (And this is a population of readers who were sufficiently engaged to take the survey!)
Here’s a relatively obscure comment of mine that was voted to +2. But it was read by at least 135 logged-in users. Since 54+% of the LW readership has never registered an account, this obscure comment was likely read by 270+ people. A similar case study—deeply threaded comment posted 4 days after a top-level post, read by at least 22 logged-in users.
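To spell out the arithmetic behind that estimate (a rough lower bound, assuming the registered/unregistered split among readers of that comment roughly matches the site-wide survey figure):

$$\text{total readers} \;\gtrsim\; \frac{135\ \text{logged-in readers}}{1 - 0.54} \;\approx\; 293 \;\ge\; 270.$$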
Based on this line of reasoning, I’m currently working on the problem of preserving focus while participating in online discussions. I’ve got some ideas, but I’d love to hear thoughts from anyone who wants to spend a minute brainstorming.
Regarding the idea that online discussion hurts attention span and productivity, I agree for the reasons you give. The book Deep Work (my review) talks more about it. I’m not too familiar with the actual research, but I seem to recall that it supports this idea. Time Well Spent is a movement that deals with this topic and has some good content/resources.
I think it’s important to separate internet time from non-internet time. The author talks about this in Deep Work. He recommends scheduling internet time in advance, so that you’re not internetting mindlessly on impulse. If willpower is an issue, try Self Control, or go somewhere without internet. I sometimes find it useful to lock my phone in the mailbox downstairs.
I’m no expert, but suspect that LW could do a better job designing for Time Well Spent.
Remove things on the sidebar like “Recent Posts” and “Recent Comments” (first item on Time Well Spent checklist). They tempt you to click around and stay on longer. If you want to see new posts or comments, you could deliberately choose to click on a link that takes you to a new webpage that shows you those things, rather than always having them shoved in your face.
Give users an opt-in “only see your inbox once per day” option. That way, you’re not tempted to constantly check it. (Second item on the checklist: letting users disconnect.) A rough sketch of how such an option might work appears after these suggestions.
I think it’d be cool to let people display their productivity goals on their profile. E.g. “I check LW Tuesday and Thursday nights, and Sunday mornings. I intend to be working during these hours.” That way perhaps you won’t feel obligated to respond to people when you should be working. Furthermore, there’s the social reward/punishment aspect of it—“Hey! You posted this comment at 4:30 on a Wednesday—weren’t you supposed to be working then?”
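As a concrete illustration of the daily-inbox suggestion above, here is a minimal sketch of how an opt-in limit might be modeled. The names (UserPrefs, last_inbox_view, etc.) are hypothetical, not anything in the actual LW codebase:

```python
# Hypothetical sketch of an opt-in "see your inbox only once per day" preference.
# UserPrefs and its fields are made-up names, not real LessWrong code.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class UserPrefs:
    daily_inbox_limit: bool = False            # user opted into the limit
    last_inbox_view: Optional[datetime] = None

def can_view_inbox(prefs: UserPrefs, now: datetime) -> bool:
    """Allow the inbox unless the user opted in and already viewed it in the last day."""
    if not prefs.daily_inbox_limit or prefs.last_inbox_view is None:
        return True
    return now - prefs.last_inbox_view >= timedelta(days=1)

def open_inbox(prefs: UserPrefs, now: datetime) -> bool:
    """Record a view if allowed; return whether the inbox was shown."""
    if can_view_inbox(prefs, now):
        prefs.last_inbox_view = now
        return True
    return False
```

The same gate could drive the UI: if open_inbox returns False, show a “come back tomorrow” notice instead of the inbox contents.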
These are just some initial thoughts. I know that we can come up with much more.
Tangential comment: a big thought of mine has always been that LW (and online forums in general) lead to the same conversation threads being repeated. I.e. the topic of “how to reduce internet distractions” has surely been discussed here before. It’d be cool if there were a central place for that discussion, organized well into some type of community wiki. I envision much less “duplication” this way. I also envision a lot more time being spent on “organizing current thoughts” as opposed to “thinking new thoughts”. (These thoughts are very rough and not well composed.)
I’ve been thinking about Patri’s post for a long time, because I’ve found the question puzzling. The friends of mine who feel the way Patri did then are the ones who look to rationality as a tool for effective egoism/self-care, entrepreneurship insights, and lifehacks. They’re focused on individual rationality and improved heuristics for improving things in their own lives fast. Doing things by yourself allows for quicker decision-making and tighter feedback loops; it’s easier to tell sooner whether what you’re doing works.
That’s often referred to as instrumental rationality, while the Sequences tended to focus more on epistemic rationality. But I think a lot of what Eliezer wrote about how to create a rational community, one which can go on to form project teams and build intellectual movements, was instrumental rationality. It’s just taken longer to tell whether that’s succeeded.
Patri’s post was written in 2010. A lot has changed since then. The Future of Life Institute (FLI) is, along with Superintelligence, responsible for boosting AI safety into the mainstream. FLI was founded by community members who originally met through LessWrong, so that’s value added to advancing AI safety that wouldn’t have existed if LW had never started. CFAR didn’t exist in 2010. Effective altruism (EA) has blown up, and I think LW doesn’t get enough credit for generating the meme pool which spawned it. Whatever one thinks of EA, it has achieved measurable progress on its own goals, such as how much money is moved, not only through GiveWell but by a foundation with an endowment of over $9 billion.
What I’ve read is the LW community aspiring to do better than how science is currently done, in new ways, or to apply rationality to new domains and make headway on one’s goals. Impressive progress has been made on many community goals.
I tend to find discussions in comments unhelpful, but enjoy discussions spread out over responding posts. If someone takes the time to write something of sufficient length and quality that they’re willing to publish it as a top-level post on their blog, etc., then it’s more often worth reading to me. My time is valuable and comments are cheap, so I’d rather read things the author invested thought in writing.
(I recognize the irony that I’m participating in this discussion right now, but this particular discussion seems an unusually good chance to spread my thinking on this topic.)
If anyone wants to collaborate in tackling the focus problem, send me a personal message with info on how to contact you. Maybe we can get some kind of randomized trial going.
I agree that there should be much more active enforcement of good norms than heavy-handed moderation (banning etc.), but I have a cached thought that lack of such moderation was a significant part of why I lost interest in lesswrong.com, though I don’t remember specific examples.
Completely agree. One particularly important mechanism, IMO, is that brains tend to pay substantially more attention to things they perceive other humans caring about. I know I write substantially better code when someone I respect will be reviewing it in detail, and that I have trouble rousing the same motivation without that.
Thinking about this more, I think that moderator status matters more than specific moderator privilege. Without one or more people like this, it’s pretty difficult to actually converge on new norms. I could make some posts suggesting new norms for e.g. posting to main vs. discussion, but without someone taking an ownership role in the site there’s no way to cause that to happen.
I suspect one of the reasons people have moved discussions to their own blogs or walls is that they feel like they can actually affect the norms there. Unofficial status works (cf. Eliezer, Yvain) but is not very scalable: it requires people willing to spend a lot of time writing content as well as thinking about, discussing, and advocating for community norms. I think you, Ben, Sarah, etc. committing to posting here makes a LessWrong revival more likely to succeed, and I would place even higher odds on success if one or more people committed to spending a significant amount of time on work such as:
Clarifying what type of content is encouraged on less wrong, and what belongs in discussion vs. main
Writing up a set of discussion norms that people can link to when saying “please do X”
Talking to people and observing the state of the community in order to improve the norms
Regularly reaching out to other writers/cross-posting relevant content, along with the seeds of a discussion
Actually banning trolls
Managing some ongoing development to improve site features
One idea that I had, that I still think is good, is essentially something like the Sunshine Regiment. The minimal elements are:
A bat-signal where you can flag a comment for attention by someone in the Sunshine Regiment.
That shows up in the inbox of everyone in the SR until one of them clicks an “I’ve got this” button.
The person who takes it on writes an explanation of how the post could have been written better / more in line with community norms.
The basic idea here is that lots of people have the ability to stage these interventions / do these corrections, but (a) it’s draining and not the sort of thing that a lot of people want to do more than X times a month, and (b) not the sort of thing low-status but norm-acclimated members of the community feel comfortable doing unless they’re given a badge.
A similar system is something like Stack Overflow’s review queue, which gives users the ability to review more complicated things as their karma gets higher, and thus offloads basic administrative duties to users in a way that scales fairly well. But while SO is mostly concerned with making sure edits aren’t vandalism and that garbage gets cleaned up, I think LW benefits from taking a more transformative approach towards posters. (If we have a lot of material that identifies errors of thought and can correct those, then let’s use it!)
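For concreteness, here is a rough sketch of the flag-and-claim flow described above (all names are made up; this is not an existing LW feature): a flagged comment is broadcast to every Sunshine Regiment member’s queue, the first “I’ve got this” click claims it, and the claimer then writes the explanation.

```python
# Rough sketch of the flag -> broadcast -> "I've got this" -> write-up flow.
# All names (FlaggedComment, SunshineQueue, ...) are hypothetical.
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlaggedComment:
    comment_id: str
    flagged_by: str
    claimed_by: Optional[str] = None     # SR member who clicked "I've got this"
    explanation: Optional[str] = None    # how the comment could have been written better

class SunshineQueue:
    def __init__(self, members: List[str]):
        self.members = members
        self.open_flags: Dict[str, FlaggedComment] = {}

    def flag(self, comment_id: str, flagged_by: str) -> None:
        """Bat-signal: an unclaimed flag becomes visible to every SR member."""
        self.open_flags[comment_id] = FlaggedComment(comment_id, flagged_by)

    def inbox(self, member: str) -> List[FlaggedComment]:
        """Each member sees every flag that nobody has claimed yet."""
        assert member in self.members
        return [f for f in self.open_flags.values() if f.claimed_by is None]

    def claim(self, comment_id: str, member: str) -> bool:
        """'I've got this': the first claim wins and hides the flag from other inboxes."""
        flag = self.open_flags.get(comment_id)
        if flag is None or flag.claimed_by is not None:
            return False
        flag.claimed_by = member
        return True

    def resolve(self, comment_id: str, explanation: str) -> None:
        """The claimer posts an explanation of how the comment could have been better."""
        flag = self.open_flags.pop(comment_id)
        flag.explanation = explanation
```

Stack Overflow’s review queues behave similarly, except that eligibility comes from karma thresholds rather than an explicit member list.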
Happy to join Sunshine Regiment if you can set it up.
Also happy to join. And I’m happy to commit to a significant amount of moderation (e.g. 10 hours a week for the next 3 months) if you think it’s useful.
Yes. I wonder if there are somehow spreadable habits of thinking (or of “reading while digesting/synthesizing/blog posting”, or …) that could themselves be written up, in order to help more folks develop the ability to add good content.
Probably too meta / too clever an idea, but may be worth some individual brainstorms?
I’ve been using the Effective Altruism Forum more frequently than LessWrong for at least the past year. I’ve noticed it’s not particularly heavily moderated. One thing is that effective altruism is mediated primarily through both in-person communities and social media, so most of the drama in EA occurs there and works itself out before it gets to the EA Forum.
Still, though, the EA Forum seems to have a high level of quality content without needing as much active moderation. The site doesn’t get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition, and transhumanism, all that plus every other cause in EA is fair game for the EA Forum[1]. From my perspective, though, it’s far and away host to the highest-quality content in the EA community. So, if anyone else here also finds that to be the case: what makes EA unlike LW, such that it doesn’t need as many moderators on its forum?
(Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need to be moderated are filtered out before they get to the EA Forum, and that the academic tone set in EA leads people to post more detailed writing.)
[1] I abbreviate “Effective Altruism Forum” as “EA Forum”, rather than “EAF”, as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don’t want people to get confused between the two.
Some guesses:
The EA forum has less of a reputation, so knowing about it selects better for various virtues
Interest in altruism probably correlates with pro-social behavior in general, e.g. netiquette
The EA forum doesn’t have the “this site is about rationality, I have opinions and I agree with them, so they’re rational, so I should post about them here” problem