I commend your vision of LessWrong.
I expect that if something like it is someday achieved, it’ll mostly be done the hard way through moderation, example-setting and simply trying as hard as possible to do the right thing until most people do the right thing most of the time.
But I also expect that the design of LessWrong on a software level will go a long way towards enabling, enforcing and encouraging the kinds of cultural norms you describe. There are plenty of examples of a website’s culture being heavily influenced by its design choices—Twitter’s 280-character limit and the resulting punishment of nuance comes to mind. It seems probable that LessWrong’s design could likewise be changed in ways that improve its culture.
So here are some of my own Terrible Ideas for improving LessWrong, which I wouldn’t implement as they are but which might be worth tweaking or prototyping in some form.
(Having scanned the comments section, it seems that most of the changes I thought of have already been suggested, but I’ve decided to outline them alongside my reasoning anyway.)
Inline commenting. If a comment responds to a specific part of a post, it may be worth presenting it alongside that part of the post, so that vital context isn’t missed by readers who skip the comments section, or lost when a comment is buried deep under many others. Inline comments could be selected automatically, chosen by the author, vetoed by the author, chosen by voting, etc. Possibly allow different types of responses, such as verification, relevant evidence/counterevidence, missed context, counterclaims, etc.
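To make this concrete, here’s a rough sketch of the data an anchored comment might carry (a sketch only; every name here is made up, and none of it reflects LessWrong’s actual schema):

```typescript
// Hypothetical data model for a comment anchored to a span of a post.
// All names are illustrative, not LessWrong's real schema.

type InlineResponseKind =
  | "verification"
  | "evidence"
  | "counterevidence"
  | "missed-context"
  | "counterclaim";

interface InlineAnchor {
  postId: string;
  startOffset: number; // character offset into the post's plain text
  endOffset: number;
  quotedText: string; // snapshot of the anchored span, so a stale anchor
                      // can be detected if the post is later edited
}

interface InlineComment {
  commentId: string;
  anchor: InlineAnchor;
  kind: InlineResponseKind;
  authorApproved: boolean; // supports an author veto
  score: number;           // supports selection by voting
}

// Show a comment inline next to its anchored span only if the author
// hasn't vetoed it and voters have pushed it above some threshold.
function shouldDisplayInline(c: InlineComment, threshold = 5): boolean {
  return c.authorApproved && c.score >= threshold;
}
```

Storing the quoted text alongside the offsets is what lets an anchor survive, or at least fail loudly, when the post is edited underneath it.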
Multiple voting axes. Having a 1D positive-negative scale obviously loses some information—some upvote to say “this post/comment is well-written”, “I agree with this post/comment”, “this post/comment contains valuable information”, “this post/comment should be ranked higher relative to others”, and pretty much any other form of positive feedback. Downvotes might be given for yet other reasons—few would upvote a comment merely for being civil, but downvoting comments for being uncivil is about as common as uncivil comments.
Aggregating these into a total score isn’t terrible, but it does lead to the behaviour you describe: upvoting, then commenting to point out specific problems with the comment so as to avoid a social motte-and-bailey. Commenting will always be necessary to point out specific flaws, but more general feedback like “this comment makes a valuable point but is poorly written and somewhat misleading” could be expressed more easily if ‘value’, ‘writing quality’ and ‘clarity’ were voted on separately.
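As a sketch of what separating the axes buys you (the axis names are just my examples from above, not a real schema): a vote becomes a small record rather than a single integer, and per-axis totals stay legible where one summed score would collapse towards an uninformative zero.

```typescript
// Hypothetical multi-axis vote; the axis names are illustrative.
type VoteAxis = "value" | "writingQuality" | "clarity";

type Vote = Partial<Record<VoteAxis, -1 | 1>>;

// "Valuable point, but poorly written and somewhat misleading"
// becomes one vote instead of an upvote plus an explanatory comment:
const mixedFeedback: Vote = {
  value: 1,
  writingQuality: -1,
  clarity: -1,
};

// Tally each axis separately instead of summing everything together.
function tally(votes: Vote[]): Record<VoteAxis, number> {
  const totals: Record<VoteAxis, number> = {
    value: 0,
    writingQuality: 0,
    clarity: 0,
  };
  for (const vote of votes) {
    for (const axis of Object.keys(totals) as VoteAxis[]) {
      totals[axis] += vote[axis] ?? 0;
    }
  }
  return totals;
}
```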
Add a new metadata field to posts and comments for expressing epistemic status. Ideally, require it to be filled in. Have a dropdown menu with a few preset options (the ones in the CFAR manual are probably a good start), but let people fill in whatever they like.
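As a schema this could be very small: one required field combining a preset with a free-text escape hatch. The preset labels below are placeholders I made up, not the CFAR manual’s actual list.

```typescript
// Hypothetical epistemic-status field. Preset labels are placeholders,
// not quoted from the CFAR manual.
const EPISTEMIC_PRESETS = [
  "confident",
  "likely",
  "speculative",
  "exploratory",
] as const;

type EpistemicPreset = (typeof EPISTEMIC_PRESETS)[number];

interface EpistemicStatus {
  preset?: EpistemicPreset;
  freeText?: string; // "let people fill in whatever they like"
}

interface Post {
  title: string;
  body: string;
  epistemicStatus: EpistemicStatus; // non-optional, so the type system
                                    // itself enforces "require it to be filled in"
}
```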
Allow people to designate particular sentences in posts/comments as being a “claim”, “hypothesis”, “conjecture”, “conclusion” (possibly linking to supporting claims), “crux”, “meta”, “assumption”, etc., integrating epistemic status into a post’s formatting. In my mind’s eye, this looks something like Medium’s ‘highlight’ feature, where a part of a post is shown in yellow if enough readers highlight it, except that here, different kinds of statements would have different formatting/signposting. Pressing the “mark as assumption” button would be easier to do and to remember than typing “this is an assumption, not a statement of fact”, and I also expect it’d be easier to read.
These could have a probability or probability distribution attached, if appropriate.
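Concretely, pressing that button might just attach a small record to the marked span (again a sketch; the names are mine):

```typescript
// Hypothetical in-text annotation: a span of a post marked with a
// statement type, optionally carrying a probability.
type StatementKind =
  | "claim"
  | "hypothesis"
  | "conjecture"
  | "conclusion"
  | "crux"
  | "meta"
  | "assumption";

interface StatementAnnotation {
  startOffset: number; // span within the post's text
  endOffset: number;
  kind: StatementKind;
  probability?: number; // e.g. 0.7, where a point estimate is appropriate
  supports?: string[];  // for a "conclusion": ids of the supporting claims
}

// What "mark as assumption" would create behind the scenes:
const example: StatementAnnotation = {
  startOffset: 120,
  endOffset: 195,
  kind: "assumption",
};
```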
Most of these would make interacting with the site require extra effort, but (if done right) that’s a feature, not a bug. Sticking to solid cultural norms takes effort, while writing destructive posts and comments is easy if due process isn’t enforced.
Still, getting these kinds of changes right is very difficult and would require extensive testing to ensure that the costs and incentives they create encourage cultural norms that are worth encouraging.