I read Duncan’s posts on concentration of force and stag hunts. I noticed that a lot of the tug-of-war he describes seems to stem from the object-level content of a post and the meta-level content (by which I mean its rationality) competing for the same space. The posts also take the strong position that eliminating the least-rational is the way to improve LessWrong along the dimension they are about.
I feel we could do more to make getting better at rationality easier by redirecting some of our efforts. A few ideas follow.
I want a way to emphasize how to make a great comment, and therefore a great contribution to the ongoing discussion. Some people have the norm of identifying good comments, but that doesn’t help as much with how to make them, or what the thought process behind them looks like. Doing this for every comment would be impractical; the workload would be impossible.
What if there were some kind of nomination process, where if I see a good comment I could flag it in such a way that the author is notified that I would like to see a meta-comment about how it came to be written in the first place?
I already enjoy meta-posts that explain other posts, and the meta-comments during our annual review where people comment on their own posts. The ability to easily request such a thing, in a way that doesn’t compete for space with other commentary, would be cool.
What about a parallel kind of curation, where posts marked with a special R symbol or something are curated by the mods (maybe plus other trusted community members) exclusively on their rationality merits? I mention this because the current curation process mostly uses general intellectual-pipeline criteria, of which rationality is only a part.
My reasoning here is that I wish it were easier to find great examples to follow. It would be good to have a list of posts to look up to, in the sense of “display rationality in your post the way these posts display rationality.”
It would be nice if we had a way to separate what a post is about from the rationality displayed by the post. Maybe something like the Alignment Forum arrangement, where there is a highly technical version of a post and a regular public version, but with the highly technical discussion replaced by discussion of the post’s rationality.
Another comparison would be Wikipedia talk pages: the article has a public face, while the talk page dissecting its contents has to be navigated to deliberately.
My reasoning here is that when reading a post and its comments, the subject of the post, the quality of the post on ordinary stylistic grounds, and the quality of the post on rationality grounds all compete for my bandwidth. Creating a specific zone where attention can be focused exclusively on the rationality elements would make it easier to identify where the problems are, and to capitalize on the resulting improvements.
In sum: the default view of a post should be about the post, but we should have a way to look at and comment on only its rationality aspects.
For the vast majority of posts, I neither expect nor want a separation between the rationality of the post and the subject of the post. This is a place to rationally discuss rationalist topics, and if either side is lacking in a post, we should comment on it and improve it.
For the few posts I think you’re talking about (posts about the communities and organizations loosely related to, or with some membership overlap with, the site), I might just recommend a tag and filtering for those of us who don’t care very much.
I also have a notion this would help with things like the renewal of old content by making it incremental. For example, there has been a low-key wish for the Sequences to be revised and updated, but they are huge, the task has proved too daunting for anyone to volunteer to tackle alone, and Eliezer is a busy man. With a tool like this, the community could divide the work into comment-sized increments, and once a critical mass had been reached, someone could transform a post into an updated version without carrying the whole burden themselves. It would also solve the problem of being too dependent on one person’s interpretations.