Agree with these reasons this is hard. A few thoughts (this is all assuming you’re the sort of person who basically thinks the Review makes sense as a concept and wants to participate; obviously this may not apply to Mark).
Re: Prestige: I don’t know if this helps, but to be clear, I expect to include good reviews in the Best of 2018 book itself. I’m personally hoping that each post comes with at least one review, and if there are deeply substantive reviews, those may be given equivalent top billing. I’m not 100% sure what will happen with reviews in the online sequence.
(In fact, I expect reviews to be a potentially easier way to end up in the book than writing posts, since the target area is more clearly specified.)
“It’s Hard to Review Posts”
This is definitely true. Often what needs reviewing is less like “author made an unsubstantiated claim or logical error” and more like “is the entire worldview that generated the post, and the connections the post made to the rest of the world, reasonable? Does it contain subtle flaws? Are there better frames for carving up the world than the one in the post?”
This is a hard problem, and doing a good job is honestly harder than one month’s worth of work. But this seems like quite an important problem for LessWrong to be able to solve. I think a lot of this site’s value comes from people crystallizing ideas that shift one’s frame, in domains where evidence is hard to come by. “How to evaluate that?” feels like an essential question for us to figure out how to answer.
My best guess for now is for reviews not to try to fully answer “does this post check out?” (in cases where that depends on a lot of empirical questions that are hard to check, or where “is this the right ontology?” is hard to answer), but instead to try to map out “what are the questions I would want answered, that would help me figure out whether this post checks out?”
(An example of this is Eli Tyre’s “Has there been a memetic collapse?” question, relating to Eliezer’s claims in Local Validity.)
Often what needs reviewing is less like “author made an unsubstantiated claim or logical error” and more like “is the entire worldview that generated the post, and the connections the post made to the rest of the world, reasonable?”
I agree with this, but given that these posts were popular because lots of people thought they were true and important, deeming the entire worldview of the author flawed would imply that the worldview of the community was flawed as well. It’s certainly possible that the community’s entire worldview is flawed, but even if you believe that to be true, it would be very difficult to explain in a way people would find believable.
[edit: I re-read your comment and mostly retract mine, but am thinking about a new version of it]
Have you got authorization from authors/copyright holders to do a book compendium?
Everyone will be contacted about inclusion in the book, with the opportunity to opt out.