None of this is meant to be anything like a final plan. But one thing that caused me to title this post “Peer Review” instead of a bunch of other things was to communicate “we are aspiring to be legibly good in a similar way to how science aspires to be legibly good.”
Since part of the goal here is “be better than science,” we won’t necessarily do the same, or even similar, things. But this frames the approach differently than you’d see if our goal were simply to be a discussion forum that highlighted the most interesting or useful posts.
The longterm canon of LW should have particular qualities that distinguish it from other sites. I’m not 100% confident I could nail them down, but some off the cuff guesses:
importance
clarity
subjection to criticism and scrutiny
ideally, mathematical rigor of a kind that can be reliably, transparently built upon (although this depends on the subject matter)
The ethos of the site should shine through at several levels – it should be apparent in the final output, the underlying process should be visibly part of the output, and the naming / UI / design should reinforce it.
I’m not at all sure that there should be a formal process called “Peer Review”, but I did want to orient the process around “create a longterm, rigorous edifice of knowledge that can be built upon.”
Even though you already say some things in this direction, I want to harp on the distinction between what we can judge by looking at a post in itself and what can and should only be judged by the test of time.
I am thinking of a particular essay (not on LW) about how peer review should not judge the significance of a work, only the accuracy, but I can’t find it. I think “something like that” is central to your point, since you
want something like a retrospective judgement about which essays have stood the test of time
also want to have more features to lower the bar.
The essay I am thinking of was about a publishing venue established with the explicit goal that peer reviewers judge the rigor of a paper but not its impact/significance, since impact cannot be judged ahead of time and is not very relevant to whether work should be published (sort of a generalization of the way replications are harder to publish: facts are facts, and should be publishable regardless of how flashy they are). The author, a scientist, was complaining that a paper of theirs had been rejected from that venue for not being of sufficient interest to the readership. The question of impact had been allowed to creep back into the reviews.
I think there’s a very general phenomenon where a venue starts out “alive” (able to spontaneously generate ideas), then produces some good stuff, which raises expectations and kills off the spontaneity. This can happen with individuals or groups. Some people start new Twitter accounts all the time because there is too much pressure to perform once an account has gathered many followers. Old LW split into Discussion and Main, and then Discussion split off discussion threads to indicate a lowered bar.
Um. I guess I’m not being very solution-oriented here.
The problem is, something seems to implicitly raise people’s standards as a place gets good, even if there are explicit statements to the contrary (like the rule stating peer reviewers should not judge impact). This can kill a place over time.
Carefully separating what can be judged in the moment vs only after the fact seems like a part of the solution.
Maybe you want to create a “comfortable sloshing mess” of relatively low signal-to-noise chatter which makes people feel comfortable posting, while also having the carefully curated canon which contains the value.
Obviously the “comfortable sloshing mess” is not distinguished primarily by its being bad. It should be good, but good along different dimensions than the canon: it should meet high standards of rigor along dimensions that are easy to judge from the text itself and not difficult or off-putting for writers to meet. There should be a “Google Docs comments” style of peer review for these aspects. (Maybe not **only** for these aspects, but somehow virtuously aligned with respect to these aspects?)